Chapter 2 Problem Solving
Prepared by
Mrs. Megha V Gupta
New Horizon Institute of Technology and Management
Steps in building a system to solve a particular
problem
1. Define the problem precisely – find input situations as well
as final situations for an acceptable solution
2. Analyze the problem – find the few important features that may
have an impact on the appropriateness of various possible
techniques for solving the problem
3. Isolate and represent task knowledge necessary to solve the
problem
4. Choose the best problem-solving technique(s) and apply to
the particular problem
PROBLEMS, PROBLEM SPACES AND SEARCH
Problem solving is a process of generating solutions from observed data.
• A ‘problem space’ is the set of all possible configurations – the
environment in which the search is performed.
■ A ‘state space’ of the problem is the set of all states reachable from the initial state.
• A ‘search’ is the process of looking for a solution in a problem space.
State Space Search
■ A state space represents a problem in terms of states and
operators that change states.
■ A state space consists of:
▪ A representation of the states the system can be in.
▪ A set of operators that can change one state into another state. Often the operators are
represented as programs that change a state representation to represent the new state.
▪ An initial state.
▪ A set of final states; some of these may be desirable, others undesirable. This set is often
represented implicitly by a program that detects terminal states.
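The components above can be captured in a minimal sketch; the class name `Problem` and its method names are illustrative, not taken from any particular library:

```python
# A sketch of a state-space problem description: states, operators,
# an initial state, and a set of final (goal) states.
class Problem:
    def __init__(self, initial_state, goal_states):
        self.initial_state = initial_state
        self.goal_states = set(goal_states)

    def actions(self, state):
        """Return the operators applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state produced by applying `action` to `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        """Detect terminal states by membership in the goal set."""
        return state in self.goal_states
```

A concrete problem subclasses this and fills in `actions` and `result`; the goal test may equally be a predicate when the goal set is implicit.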
Toy Problems
8-puzzle
Water Jug
Missionaries and Cannibals
8-puzzle problem
“It consists of a 3x3 board with 9 spaces, 8 of which hold tiles
bearing numbers from 1 to 8. One space is left blank. A tile
adjacent to the blank space can move into it. We have to arrange the
tiles in a given sequence.”
The start state is any arrangement of the tiles,
and the goal state is the tiles arranged in a specific sequence.
Solution: the sequence of tile movements needed to reach the
goal state.
The transition function (the direction in which the blank space
effectively moves: left, right, up, or down) generates the
legal states
Example
Initial state:
4 1 3
2 _ 6
7 5 8
Goal state:
1 2 3
4 5 6
7 8 _
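The transition function described above can be sketched as a successor generator; encoding the board as a flat 9-tuple with 0 for the blank is an assumption made for illustration:

```python
# Each legal move swaps the blank (0) with an adjacent tile.
# The board is a 9-tuple read row by row.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def successors(state):
    """Yield (move, new_state) pairs for a 3x3 board stored as a 9-tuple."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for move, delta in MOVES.items():
        # reject moves that would slide the blank off the board
        if move == "up" and row == 0:
            continue
        if move == "down" and row == 2:
            continue
        if move == "left" and col == 0:
            continue
        if move == "right" and col == 2:
            continue
        target = blank + delta
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        yield move, tuple(board)
```

With the blank in the center there are four successors; in a corner, only two.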
Water-Jug Problem
“You are given two jugs, a 4-gallon one and a 3-gallon one, a
pump which has unlimited water which you can use to fill the
jug, and the ground on which water may be poured. Neither
jug has any measuring markings on it. How can you get
exactly 2 gallons of water in the 4-gallon jug?”
Water jug problem
■ A water jug problem: a 4-gallon jug and a 3-gallon jug
- no markings on the jugs
- a pump to fill the jugs with water
- How can you get exactly 2 gallons of water
into the 4-gallon jug?
A state space search
(x, y) : ordered pair
x : gallons of water in the 4-gallon jug, x = 0, 1, 2, 3, 4
y : gallons of water in the 3-gallon jug, y = 0, 1, 2, 3
start state : (0, 0)
goal state : (2, n) for any value of n
Water jug rules (production rules)
1. (x, y) if x < 4 → (4, y) : fill the 4-gallon jug
2. (x, y) if y < 3 → (x, 3) : fill the 3-gallon jug
3. (x, y) if x > 0 → (x − d, y) : pour some water out of the 4-gallon jug
4. (x, y) if y > 0 → (x, y − d) : pour some water out of the 3-gallon jug
5. (x, y) if x > 0 → (0, y) : empty the 4-gallon jug on the ground
6. (x, y) if y > 0 → (x, 0) : empty the 3-gallon jug on the ground
7. (x, y) if x + y ≥ 4 and y > 0 → (4, y − (4 − x)) : pour from the 3-gallon jug into the 4-gallon jug until it is full
8. (x, y) if x + y ≥ 3 and x > 0 → (x − (3 − y), 3) : pour from the 4-gallon jug into the 3-gallon jug until it is full
9. (x, y) if x + y ≤ 4 and y > 0 → (x + y, 0) : pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y) if x + y ≤ 3 and x > 0 → (0, x + y) : pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0, 2) → (2, 0) : pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2, y) → (0, y) : empty the 2 gallons in the 4-gallon jug on the ground
A water jug solution
4-Gallon Jug   3-Gallon Jug   Rule Applied
0              0              (start)
0              3              2
3              0              9
3              3              2
4              2              7
0              2              5 or 12
2              0              9 or 11
Solution : path / plan
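The search for such a plan can be sketched as a breadth-first traversal of the (x, y) state space; the fill/empty/pour operations below mirror the rules informally rather than by their exact numbers:

```python
from collections import deque

# Breadth-first search over water-jug states (x, y): x gallons in the
# 4-gallon jug, y gallons in the 3-gallon jug.
def water_jug(capacity4=4, capacity3=3, target=2):
    def successors(x, y):
        yield capacity4, y                   # fill the 4-gallon jug
        yield x, capacity3                   # fill the 3-gallon jug
        yield 0, y                           # empty the 4-gallon jug
        yield x, 0                           # empty the 3-gallon jug
        pour = min(x, capacity3 - y)         # pour 4-gallon into 3-gallon
        yield x - pour, y + pour
        pour = min(y, capacity4 - x)         # pour 3-gallon into 4-gallon
        yield x + pour, y - pour

    frontier = deque([((0, 0), [(0, 0)])])   # (state, path so far)
    visited = {(0, 0)}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == target:
            return path
        for nxt in successors(x, y):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None
```

Because the traversal is breadth-first, the returned plan uses the fewest operations; it has the same length (six moves) as the table above.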
Missionaries and Cannibals
Three missionaries and three cannibals
wish to cross the river. They have a small
boat that will carry up to two people.
Everyone can navigate the boat. If at any
time the Cannibals outnumber the
Missionaries on either bank of the river,
they will eat the Missionaries. Find the
smallest number of crossings that will allow
everyone to cross the river safely.
https://www.youtube.com/watch?v=W9NEWxabGmg
Production Rules
Farmer, Wolf, Goat and the Cabbage
https://www.youtube.com/watch?v=go294ZR4Rdg
State Space Representation
Problem-solving agent
■ Problem-solving agent
■ A kind of goal-based agent
■ It solves problem by
■ finding sequences of actions that lead to desirable states (goals)
■ To solve a problem,
■ the first step is the goal formulation, based on the current situation
■ The algorithms here are uninformed
■ No extra information about the problem other than its definition
■ No heuristics (rules of thumb)
Goal formulation
■ The goal is formulated
■ as a set of world states, in which the goal is
satisfied
■ Reaching from initial state -> goal state
■ Actions are required
■ Actions are the operators
■ causing transitions between world states
■ Actions should be abstract to a certain degree,
rather than very detailed
■ E.g., “turn left” vs. “turn left 30 degrees”, etc.
Problem formulation
■ The process of deciding
■ what actions and states to consider, given a goal.
■ E.g., driving Amman -> Zarqa
■ in-between states and actions defined
■ States: Some places in Amman & Zarqa
■ Actions: Turn left, Turn right, go straight, accelerate & brake, etc.
■ Because there are many ways to achieve the same goal
■ Those ways are together expressed as a tree
■ When there are multiple options of unknown value at a point,
■ the agent can examine the different possible sequences of actions, and choose
the best
■ This process of looking for the best sequence is called search
■ A search algorithm takes a problem as input and returns a solution (the best
sequence) in the form of an action sequence.
“formulate, search, execute”
Once a solution is found, the actions it recommends can be
carried out; this is called the execution phase.
Thus, we have a simple “formulate, search, execute” design
for the agent.
Problem-solving agents
A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions
that would solve the problem, and then executes the actions one at a time. When this is complete, it
formulates another goal and starts over.
Example: Romania
■ On holiday in Romania; currently in Arad.
■ Flight leaves tomorrow from Bucharest
■ Formulate goal:
■ be in Bucharest
■ Formulate problem:
■ states: various cities
■ actions: drive between cities
■ Find solution:
■ sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Well-defined problems and solutions
A problem is defined by 5 components:
■ Initial state
■ Actions
■ Transition model or (Successor functions)
■ Goal Test.
■ Path Cost.
Well-defined problems and solutions
1. The initial state that the agent starts in
2. Actions:
A description of the possible actions available to the agent.
3. Transition model: a description of what each action does.
A successor is any state reachable from a given state by
a single action.
Together the initial state, actions and transition model define
the state space
■ the set of all states reachable from the initial state by any sequence of
actions.
A path in the state space:
■ a sequence of states connected by a sequence of actions.
Well-defined problems and solutions
4. The goal test which determines whether a given state is a goal
state
■ Sometimes there is an explicit set of possible goal states, and the
test simply checks whether the given state is one of them.
■ Sometimes the goal is described by abstract property rather than
explicitly enumerated set of states.
E.g., in chess, the goal is to reach a state called “checkmate,”
where the opponent’s king is under attack and cannot escape.
Well-defined problems and solutions
5. A path cost function,
■ assigns a numeric cost to each path
■ = performance measure
■ denoted by g
■ to distinguish the best path from others
Usually the path cost is the sum of the step costs of the individual actions (in
the action list)
The solution of a problem is then
■ a path from the initial state to a state satisfying the goal test
Optimal solution
■ the solution with lowest path cost among all solutions
Vacuum world state space graph
■ states? The state is determined by both the agent location and the dirt locations.
■ Initial state: any
■ actions? Left, Right, Suck
■ Transition model: The actions have their expected effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a clean square have no effect.
■ goal test? no dirt at all locations
■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
Example: The 8-puzzle
■ states? locations of tiles
■ Initial state: Any state can be designated as the initial state.
■ actions? move blank left, right, up, down
■ Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure above, the resulting state
has the 5 and the blank switched.
■ goal test? = goal state (given)
■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
Example: robotic assembly
■ states?: real-valued coordinates of the robot joint angles and of the
parts of the object to be assembled
■ actions?: continuous motions of robot joints
■ goal test?: complete assembly
■ path cost?: time to execute
Traveling Salesman Problem(TSP)
States: cities
■ Initial state: A
■ Successor function: Travel from one city to another
connected by a road
■ Goal test: the trip visits each city exactly once, starting and
ending at A.
■ Path cost: traveling time
Map Coloring Problem
Using only four colors, you have to color a planar map so that no two
adjacent regions have the same color.
Initial State: Planar map with no regions colored.
Goal Test: All regions of the map are colored and no two
adjacent regions have the same color.
Successor function: Choose an uncolored region and color it
with a color that is different from all adjacent regions.
Cost function: Could be 1 for each color used.
Airline Travel problems
■ States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
■ Initial state: This is specified by the user’s query.
■ Actions: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.
■ Transition model: The state resulting from taking a flight will have the flight’s destination as
the current location and the flight’s arrival time as the current time.
■ Goal test: Are we at the final destination specified by the user?
■ Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
QUIZ
■ A vacuum Cleaner world with two location, two sensors -
location and dirt , three actions - left, right and suck will
have a state space with how many possible states ?
■ A) 6
■ B) 8
■ C) 10
■ D) 12
Search tree
■ Initial state
■ The root of the search tree is a search node
■ Expanding
■ applying successor function to the current state
thereby generating a new set of states
■ leaf nodes
■ the states having no successors
Fringe : Set of search nodes that have not been
expanded yet.
Tree search example
Search tree
■ The essence of searching:
■ choosing one option and keeping the others for later
inspection,
■ in case the first choice turns out not to be correct
■ Hence we have the search strategy,
■ which determines the choice of which state to
expand
■ a good choice → less work → faster
■ Important:
■ state space ≠ search tree
Search tree
■ A node has five components:
■ STATE: which state it is in the state space
■ PARENT-NODE: from which node it is generated
■ ACTION: which action applied to its parent-node
to generate it
■ PATH-COST: the cost, g(n), from initial state to
the node n itself
■ DEPTH: number of steps along the path from the
initial state
Implementation: states vs. nodes
■ A state is a (representation of) a physical configuration
■ A node is a data structure constituting part of a search tree; it includes state,
parent node, action, path cost g(n), depth
■ The Expand function creates new nodes, filling in the various fields and using
the SuccessorFn of the problem to create the corresponding states.
Search strategies
■ A search strategy is defined by picking the order of node
expansion
■ Strategies are evaluated along the following dimensions:
■ Completeness (guarantee to find a solution if there is one): does it
always find a solution if one exists?
■ time complexity (how long does it take to find a solution): number
of nodes generated during the search
■ space complexity (how much memory is needed to perform the
search): maximum number of nodes stored in memory
■ Optimality (does it give highest quality solution when there are
several different solutions): does it always find a least-cost solution?
■ Time and space complexity are measured in terms of
■ b: branching factor of the search tree (max. no. of successors of any node)
■ d: depth of the least-cost solution (shallowest goal node)
■ m: the maximum length of any path in the state space (maximum depth of the state
space)
Measuring problem-solving performance
Search strategies
■ Uninformed search or blind search
■ no information about the number of steps
■ or the path cost from the current state to the goal
■ is applicable when we only distinguish goal states from
non-goal states.
■ search the state space blindly
■ Informed search, or heuristic search
■ a cleverer strategy that searches toward the goal,
■ based on the information from the current state so far
■ is applied if we have some knowledge of the path cost
or the number of steps between the current state and a
goal.
Uninformed search Methods
strategies that use only the information available in the problem definition. While
searching you have no clue whether one non-goal state is better than any other.
Your search is blind.
■ Breadth-first search
■ Uniform cost search
■ Depth-first search
■ Depth-limited search
■ Iterative deepening search
■ Bidirectional search
Breadth-first search
■ Expand shallowest unexpanded node
Implementation:
■ fringe is a FIFO queue, i.e., new successors go at end of queue
Is A a goal state?
Breadth-first search
Expand:
fringe=[C,D,E]
Is C a goal state?
Expand:
fringe=[D,E,F,G]
Is D a goal state?
Example
BFS
Properties of breadth-first search
■ Complete? Yes (if b is finite), provided the shallowest goal node is at some
finite depth d
■ Time Complexity? b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
■ Space Complexity? O(b^(d+1)) (keeps every node in memory)
■ Optimal? Not in general; yes if the cost is 1 per step, or more generally if
the path cost is a non-decreasing function of the depth of the node
Space is the bigger problem (more than time)
Breadth First Search
Imagine searching a uniform tree where every state has b successors.
The root of the search tree generates b nodes at the first level,
each of which generates b more nodes, for a total of b^2 at the second level.
Each of these generates b more nodes, yielding b^3 nodes at the third level,
and so on.
Now suppose that the solution is at depth d.
In the worst case, it is the last node generated at that level.
Then the total number of nodes generated is
b + b^2 + b^3 + … + b^d = O(b^d).
(If the algorithm were to apply the goal test to nodes when selected for expansion, rather
than when generated, the whole layer of nodes at depth d would be expanded before
the goal was detected and the time complexity would be O(b^(d+1)).)
Breadth-first search
[Figure: an example search tree rooted at S, with goal node G reachable along several paths of different costs; breadth-first search expands the tree level by level and returns the shallowest goal.]
Uniform Cost Search
Implementation: fringe = a queue ordered by path cost g(n)
Equivalent to breadth-first search if all step costs are equal.
Complete? Yes, if every step cost exceeds some small positive constant ε
Time? number of nodes with path cost ≤ the cost of the optimal solution
Space? number of nodes on paths with path cost ≤ the cost of the optimal
solution
Optimal? Yes: nodes are expanded in order of increasing path cost.
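A sketch of the strategy with the fringe as a priority queue ordered by path cost g; representing the weighted graph as a dict of dicts is an assumption for illustration:

```python
import heapq

# Uniform-cost search: expand the node with the smallest path cost g.
def uniform_cost(graph, start, goal):
    frontier = [(0, start, [start])]           # (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                     # cheapest path to goal
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            # keep only the cheapest known way to reach each node
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None
```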
Depth-first search
■ Always expands one of the nodes at the deepest
level of the tree
■ Only when the search hits a dead end
■ does it go back and expand nodes at shallower levels
■ A dead end is a leaf node that is not the goal
■ Backtracking search
■ only one successor is generated on expansion
■ rather than all successors
■ needs even less memory
Depth-first search
■ Expand deepest unexpanded node
■ Implementation:
■ fringe = Last-In-First-Out (LIFO) queue, i.e., a stack: successors go at the front
Is A a goal state? queue=[B,C]
Is B a goal state?
Depth-first search
queue=[D,E,C]
Is D = goal state?
queue=[H,I,E,C]
Is H = goal state?
Depth-first search
queue=[I,E,C]
Is I = goal state?
queue=[E,C]
Is E = goal state?
Depth-first search
queue=[J,K,C]
Is J = goal state?
queue=[K,C]
Is K = goal state?
Depth-first search
queue=[C]
Is C = goal state?
queue=[F,G]
Is F = goal state?
Depth-first search
queue=[L,M,G]
Is L = goal state?
queue=[M,G]
Is M = goal state?
Example DFS
Depth-first search
[Figure: an example search tree rooted at S; depth-first search follows the left-most branch downward first, backtracking only at dead ends.]
Properties of depth-first search
■ Complete? No: fails in infinite-depth spaces or spaces with loops
■ complete in finite spaces
■ Time? O(b^m): terrible if m (the maximum depth of the state space) is
much larger than d (the depth of the shallowest solution), and
infinite if the tree is unbounded
May be much faster than breadth-first search if solutions are dense
■ Space? O(bm): linear; the memory requirement is
branching factor (b) × maximum depth (m)
■ Optimal? No (it may find a non-optimal goal first); cannot guarantee
the shallowest solution.
Depth First Search
A depth-first tree search may generate all of the O(b^m) nodes in the search tree,
where m is the maximum depth of any node; this can be much greater than the size
of the state space.
A depth-first tree search needs to store only a single path from the root
to a leaf node, along with the remaining unexpanded sibling nodes for each node on
the path. Once a node has been expanded, it can be removed from memory as soon
as all its descendants have been fully explored.
For a state space with branching factor b and maximum depth m, depth-first search
requires storage of only O(bm) nodes.
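The strategy can be sketched with an explicit LIFO stack; the adjacency-dict graph encoding is assumed for illustration:

```python
# Depth-first search: always expand the deepest unexpanded node.
def dfs(graph, start, goal):
    stack = [[start]]                  # stack of paths: deepest on top
    visited = {start}
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        # push children in reverse so the first child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                visited.add(child)
                stack.append(path + [child])
    return None
```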
DFS
Depth-Limited Search
■ Depth-first search is clearly dangerous
• if the tree is very deep, we risk finding a suboptimal solution;
• if the tree is infinite, we risk an infinite loop.
■ The embarrassing failure of depth-first search in infinite state spaces
can be alleviated by supplying depth-first search with a predetermined
depth limit l. That is, nodes at depth l are treated as if they have no
successors. This approach is called depth-limited search.
■ Three possible outcomes:
■ Solution
■ Failure (no solution)
■ Cutoff (no solution within cutoff)
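The three outcomes can be seen in a small recursive sketch of the technique (assuming, for simplicity, an acyclic adjacency-dict graph):

```python
# Depth-limited search: depth-first search that treats nodes at the
# depth limit as if they had no successors. Returns a solution path,
# "failure" (no solution at any depth), or "cutoff" (none within limit).
def depth_limited(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff_seen = False
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_seen = True
        elif result != "failure":
            return [node] + result
    return "cutoff" if cutoff_seen else "failure"
```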
Depth-limited search
■ However, it is usually not easy to choose a suitable
maximum depth
■ too small → no solution can be found
■ too large → the same problems as depth-first search
■ The search is
■ complete, provided the limit l ≥ d (the depth of the shallowest goal)
■ but still not optimal
Depth-limited search
[Figure: the example search tree rooted at S searched with depth limit l = 3; nodes below depth 3 are not expanded.]
Iterative deepening search
■ Usually we do not know a reasonable depth limit in advance.
■ Iterative deepening search repeatedly runs depth-limited search for
increasing depth limits 0, 1, 2, . . .
■ this essentially combines the advantages of depth-first and breadth
first search;
■ the procedure is complete and optimal (when step costs are identical);
■ the memory requirement is similar to that of depth-first search;
Iterative deepening search
The iterative deepening search algorithm, which repeatedly applies depth limited
search with increasing limits. It terminates when a solution is found or if the depth limited
search returns failure, meaning that no solution exists.
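The loop can be sketched as follows, with a small depth-limited search inlined (an acyclic adjacency-dict graph is assumed for simplicity):

```python
# Iterative deepening: rerun depth-limited search with limits 0, 1, 2, ...
# until a solution is found, or until depth-limited search reports plain
# failure (meaning no solution exists at any depth).
def iterative_deepening(graph, start, goal, max_limit=50):
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return "cutoff"
        cutoff = False
        for child in graph.get(node, []):
            r = dls(child, limit - 1)
            if r == "cutoff":
                cutoff = True
            elif r != "failure":
                return [node] + r
        return "cutoff" if cutoff else "failure"

    for limit in range(max_limit + 1):
        result = dls(start, limit)
        if result == "failure":
            return None                # no solution exists
        if result != "cutoff":
            return result              # shallowest solution path
    return None
```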
Iterative deepening search l =0
Iterative deepening search l =1
Iterative deepening search l =2
Iterative deepening search l =3
■ Note: We visit top level nodes multiple times. The last (or max depth) level is
visited once, second last level is visited twice, and so on. It may seem expensive,
but it turns out to be not so costly, since in a tree most of the nodes are in the
bottom level. So it does not matter much if the upper levels are visited multiple
times.
■ Number of nodes generated in an iterative deepening search to depth d with
branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 1·b^d
Iterative deepening search
■ For b = 2, d = 3:
■ N_BFS = b + b^2 + b^3 + (b^(d+1) − b) = 2 + 4 + 8 + (2^4 − 2) = 28
■ N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + 1·b^3 = 4·1 + 3·2 + 2·4 + 1·8
= 4 + 6 + 8 + 8 = 26
■ Iterative deepening is the preferred uninformed search method when the
search space is large and the depth of the solution is not known.
Properties of iterative deepening search
■ Complete? Yes
■ Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
■ Space? O(bd): linear, like depth-first search
■ Optimal? Yes, if step cost = 1
Iterative deepening search
■ Suppose we have a tree with branching factor ‘b’ (number of children of each node)
and depth ‘d’, i.e., there are b^d nodes at the bottom level.
■ In an iterative deepening search, the nodes on the bottom level are expanded once,
those on the next-to-bottom level are expanded twice, and so on, up to the root of the
search tree, which is expanded d+1 times.
■ IDDFS has the same asymptotic time complexity as DFS and BFS, but it is indeed slower
than both, as it has a higher constant factor in its time-complexity expression.
■ IDDFS is best suited for large (even infinite) trees when the solution depth is unknown.
Example IDS
Bidirectional search
■ Run two simultaneous searches
■ one forward from the initial state another
backward from the goal
■ stop when the two searches meet
■ However, searching backward can be difficult:
■ there may be a huge number of goal states
■ at a goal state, which actions were used to reach it?
■ are the actions reversible, so that predecessors
can be computed?
Comparing search strategies
Informed Search Methods
■ How can we make use of other knowledge about the
problem to improve searching strategy?
■ Map example:
■ Heuristic: Expand those nodes closest in “straight-line” distance to goal
■ 8-puzzle:
■ Heuristic: Expand those nodes with the most tiles in place
Heuristic
■ Heuristics (Greek heuriskein = find, discover): "the study of
the methods and rules of discovery and invention".
■ Heuristic - a “rule of thumb” used to help guide search
■ often, something learned experientially and recalled when needed
■ Heuristic Function - function applied to a state in a search
space to indicate a likelihood of success if that state is
selected
Heuristic function
■A heuristic function at a node n is an estimate of the optimum cost from
the current node to a goal. It is denoted by h(n).
■h(n) = estimated cost of the cheapest path from node n to a goal node
Example: We want a path from Kolkata to Guwahati
Heuristic for Guwahati may be straight-line distance between Kolkata
and Guwahati
h(Kolkata) = EuclideanDistance(Kolkata, Guwahati)
Heuristics can also help speed up exhaustive, blind search, such as depth-first and breadth-first search.
A Simple 8-puzzle heuristic
[Figure: a current 8-puzzle board, the three successor boards produced by the legal moves of the blank, and the goal board. Which move is best?]
Another approach
■ Number of tiles in the incorrect position.
■ This can also be considered a lower bound on the number of moves from a
solution!
■ The “best” move is the one with the lowest number returned by the heuristic.
[Figure: the current board and its three successors, scored by this heuristic: h = 2, h = 4, and h = 3. The successor with h = 2 is the best move.]
heuristics
E.g., for the 8-puzzle:
■ h1(n) = number of misplaced tiles
■ h2(n) = total Manhattan distance
(i.e., sum of the distances of the tiles
from the goal position)
■ h1(S) = 8
■ h2(S) = 3+1+2+2+2+3+3+2 = 18
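Both heuristics can be sketched for states stored as flat 9-tuples with 0 as the blank (an assumed encoding):

```python
# h1: misplaced tiles; h2: total Manhattan (city-block) distance.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """h1: count the tiles (excluding the blank) that are out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """h2: sum of horizontal + vertical distances of each tile."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

h2 dominates h1 (h2 ≥ h1 on every state), and both are admissible.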
Best-first search
■ Idea: use an evaluation function f(n) for each node
■ f(n) provides an estimate for the total cost.
🡪 Expand the node n with smallest f(n).
■ Implementation:
Order the nodes in fringe increasing order of cost.
■ Special cases:
■ greedy best-first search
■ A* search
Best-First Search
■ Use an evaluation function f(n).
■ Always choose the node from fringe that has the lowest f
value.
Greedy best-first search
■ f(n) = h(n) estimate of cost from n to goal
■ e.g., f(n) = straight-line distance from n to Bucharest
■ Greedy best-first search expands the node that
appears to be closest to goal.
Romania with straight-line dist.
Greedy best-first search example
Properties of greedy best-first search
■ Complete? No: can get stuck in loops
■ Time? O(b^m), but a good heuristic can give dramatic
improvement
■ Space? O(b^m): keeps all nodes in memory
■ Optimal? No
e.g., Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is
shorter!
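The strategy can be sketched with the fringe ordered purely by h(n); the adjacency-dict graph and the heuristic table h are assumed encodings for illustration:

```python
import heapq

# Greedy best-first search: expand the node that appears closest to
# the goal, i.e., the one with the smallest h(n). Path cost g is ignored.
def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child in graph.get(node, {}):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None
```

Because g(n) is ignored, the returned path need not be the cheapest one, which is exactly the non-optimality noted above.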
E.g. Route finding problem
S is the starting state, G is the goal state. Let us run the greedy search algorithm
for the graph given in Figure a. The straight line distance heuristic estimates for
the nodes are shown in Figure b.
Figure a Figure b
A* search
■ Hart, Nilsson & Raphael (1968): best-first search with f(n) = g(n) + h(n)
■ Idea: avoid expanding paths that are already expensive
■ Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of path through n to goal
A* Shortest Path Example
A* algorithm
Insert the root node into the queue
While the queue is not empty:
    Dequeue the element with the highest priority, i.e., the lowest f(n)
    (if priorities are equal, the alphabetically smaller path is chosen)
    If the path ends in the goal state:
        print the path and exit
    Else:
        Insert all the children of the dequeued element, with
        f(n) as the priority
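The loop above can be sketched with a binary heap as the priority queue; the dict-of-dicts graph and the heuristic table h are assumed encodings for illustration:

```python
import heapq

# A* search: expand the node with the smallest f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            # keep only the cheapest known way to reach each node
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[child], new_g, child, path + [child]))
    return None
```

With an admissible h, the first goal dequeued lies on an optimal path.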
A* Example
Consider the search problem below with start state S and goal state G. The transition costs are next to the edges, and the heuristic
values are next to the states. What is the final cost using A* search?
8 Puzzle Example
■ f(n) = g(n) + h(n)
■ What is the usual g(n)?
■ two well-known h(n)’s
■ h1 = the number of misplaced tiles
■ h2 = the sum of the distances of the tiles from their goal positions,
using city block distance, which is the sum of the horizontal and
vertical distances
Applying A* on 8-puzzle
[Figure: A* applied to the 8-puzzle with start state 2 8 3 / 1 6 4 / 7 _ 5 and goal state 1 2 3 / 8 _ 4 / 7 6 5, using the number of misplaced tiles as the heuristic. The start node has f = 0 + 4 = 4; at each step the successor with the lowest f = g + h is expanded next until the goal is reached.]
A*: admissibility
■ A search algorithm is admissible if, for any graph, it terminates in an optimal
path from the start state to a goal state whenever such a path exists.
■ A heuristic function is admissible (guarantees termination with an optimal path)
if it satisfies the following property:
h(n) ≤ h*(n) (the heuristic function underestimates the true cost)
■ h(n) has to be an optimistic estimator; it must never overestimate h*(n).
■ h*(n): the cost of the cheapest solution path from n to a goal node
■ If h(n) is admissible, then A* search will find an optimal solution.
For example, suppose you're trying to drive from Chicago to New York and your
heuristic is what your friends think about geography. If your first friend says, "Hey,
Boston is close to New York" (underestimating), then you'll waste time looking at
routes via Boston. Before long, you'll realise that any sensible route from Chicago to
Boston already gets fairly close to New York before reaching Boston and that actually
going via Boston just adds more miles. So you'll stop considering routes via Boston
and you'll move on to find the optimal route. Your underestimating friend cost you a
bit of planning time but, in the end, you found the right route.
Suppose that another friend says, "Indiana is a million miles from New York!"
Nowhere else on earth is more than 13,000 miles from New York so, if you take your
friend's advice literally, you won't even consider any route through Indiana. This
overestimate makes you drive for nearly twice as long and cover 50% more distance
than necessary.
Memory Bounded Heuristic Search: Recursive Best-First Search (RBFS)
■ How can we solve the memory problem for
A* search?
■ Idea: Try something like depth first search,
but let’s not forget everything about the
branches we have partially explored.
■ We remember the best f-value we have
found so far in the branch we are deleting.
RBFS:
■ RBFS changes its mind very often in practice. This is because f = g + h
becomes more accurate (less optimistic) as we approach the goal; hence,
higher-level nodes have smaller f-values and will be explored first.
■ RBFS remembers the best alternative f-value over the fringe nodes that
are not children of the current node: this value decides whether to back up.
■ Problem: we should keep in memory whatever we can.
Recursive best-first
■ It is a recursive implementation of best-first, with linear
spatial cost.
■ It forgets a branch when its cost is more than the best
alternative.
■ The cost of the forgotten branch is stored in the parent node
as its new cost.
■ The forgotten branch is re-expanded if its cost becomes the
best once again.
Local search algorithms and optimization problems
■ Local search algorithms operate using a single current
state and generally move only to neighbors of that state.
■ In addition to finding goals, these algorithms are useful for
solving optimization problems, in which the aim is to find the
best state according to an objective function.
■ In LS, there is a function to evaluate the quality of the
states, but this is not necessarily related to a cost.
Local search and optimization
■ Local search
■ Keep track of single current state
■ Move only to neighboring states
■ Ignore paths
■ Advantages:
■ Use very little memory
■ Can often find reasonable solutions in large or infinite
(continuous) state spaces.
■ “Pure optimization” problems
■ All states have an objective function
■ Goal is to find state with max (or min) objective value
■ Does not quite fit into path-cost/goal-state formulation
■ Local search can do quite well on these problems.
Local search algorithms
■ These algorithms do not systematically explore all the state space.
■ The heuristic (or evaluation) function is used to reduce the search
space (not considering states which are not worth being explored).
■ Algorithms do not usually keep track of the path traveled. The
memory cost is minimal.
Hill Climbing (Greedy Local Search)
■Searching for a goal state = Climbing to the top of a hill
■Heuristic function to estimate how close a given state is to a
goal state.
■ Children are considered only if their evaluation-function value is better
than that of the parent (reduction of the search space).
Simple Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no
new operators left to be applied:
− Select and apply a new operator
− Evaluate the new state:
* goal → quit
* better than current state → becomes the new current state
* not better → try another operator
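The loop above can be sketched as follows; `neighbors` and `objective` are assumed callables supplied by the problem:

```python
# Simple hill climbing: move to the first neighbor that improves the
# objective; stop when no neighbor is better (a local maximum or plateau).
def hill_climb(start, neighbors, objective, max_steps=1000):
    current = start
    for _ in range(max_steps):
        improved = False
        for candidate in neighbors(current):
            if objective(candidate) > objective(current):
                current = candidate        # first better neighbor wins
                improved = True
                break
        if not improved:
            return current                 # no operator improves the state
    return current
```

For example, maximizing -(x - 3)^2 over the integers with neighbors x ± 1 climbs from 0 up to the peak at x = 3.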
Different regions in the State Space
• Local maximum: It is a state which is better than its neighboring state
however there exists a state which is better than it(global maximum).
This state is better because here the value of the objective function is
higher than its neighbors.
• Global maximum : It is the best possible state in the state space
diagram. This because at this state, objective function has highest
value.
• Plateau/flat local maximum : It is a flat region of state space where
neighboring states have the same value.
• Ridge : It is region which is higher than its neighbours but itself has a
slope. It is a special kind of local maximum.
• Current state : The region of state space diagram where we are
currently present during the search.
• Shoulder : It is a plateau that has an uphill edge.
Local maximum: a state that is better than all of its neighbours, but not as good as the global maximum.
Plateau: a flat area of the search space in which all neighboring states have the same value. To escape a
plateau, make a big jump to reach a new section of the search space.
Ridges (which result in a sequence of local maxima): the orientation of the high region, relative to
the set of available moves, makes it impossible to climb up directly.
Steepest-Ascent Hill Climbing (Gradient Search)
• Standard hill-climbing search algorithm
– A simple loop that searches for and selects any operation that
improves the current state.
• Steepest-ascent hill climbing, or gradient search
– A loop that continuously moves in the direction of increasing
value; it terminates when a peak is reached.
– The best move (not just any improving one) is
selected.
■ Considers all the moves from the current state.
■ Selects the best one as the next state.
Steepest-Ascent Hill Climbing (Gradient Search)
Unlike simple hill climbing, it considers all the successor nodes, compares them, and
chooses the node closest to the solution. In this respect steepest-ascent hill climbing
resembles best-first search, since it examines every successor rather than stopping at the first improvement.
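The difference from simple hill climbing can be sketched in Python; `value` and `neighbors` are assumed placeholders for the evaluation function and operators, and the toy landscape is an illustrative example:

```python
def steepest_ascent(state, value, neighbors):
    """Evaluate ALL successors of the current state and move to the best one;
    stop when no successor improves on the current state (a peak)."""
    while True:
        successors = neighbors(state)
        if not successors:
            return state
        best = max(successors, key=value)   # the best move, not just any improving one
        if value(best) <= value(state):     # no uphill move left: a maximum
            return state
        state = best

# toy landscape: f is maximal at x = 5
f = lambda x: -(x - 5) ** 2
result = steepest_ascent(0, f, lambda x: [x - 1, x + 1])
print(result)  # 5
```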
Stochastic hill climbing does not examine all the neighbors. It selects one successor at random
and decides whether to move to it or to search for a better one.
Random restart Hill climbing
The random-restart algorithm is based on a try-and-try-again strategy: it conducts a
series of hill-climbing searches from randomly generated initial states until a goal is found.
Success depends largely on the shape of the state-space landscape: if there are few plateaus,
local maxima, and ridges, it is easy to reach the destination.
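A small Python sketch of the restart loop, using a steepest-ascent inner climb. The ten-point landscape (with a local maximum of value 3 and a global maximum of value 5) is an assumed example:

```python
import random

def steepest_ascent(state, value, neighbors):
    # inner hill climb: move to the best successor until no improvement
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state
        state = best

def random_restart(climb, random_state, value, restarts=50):
    """Run hill climbing from many random initial states; keep the best peak."""
    best = climb(random_state())
    for _ in range(restarts):
        candidate = climb(random_state())
        if value(candidate) > value(best):
            best = candidate
    return best

# landscape with a local maximum (value 3 at index 2) and a global one (value 5 at index 8)
values = [1, 2, 3, 2, 1, 2, 3, 4, 5, 4]
value = lambda i: values[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(values)]

random.seed(0)
peak = random_restart(lambda s: steepest_ascent(s, value, neighbors),
                      lambda: random.randrange(len(values)), value)
print(values[peak])  # 5
```

A single climb starting in the left basin would stop at value 3; restarting from enough random states all but guarantees some run begins in the global maximum's basin.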
Blocks World
In this problem, an initial arrangement of eight blocks
is provided. We have to reach the GOAL arrangement by
moving blocks in a systematic order. States are
evaluated using a heuristic, so that we can get the next best
node by applying the steepest-ascent hill-climbing technique.
Two heuristics are considered: (i) LOCAL and (ii) GLOBAL.
Both functions try to maximize the score/cost of
each state.
LOCAL
The cost/score of the goal state is 8 (using the local heuristic),
because all the blocks are at their correct positions.
(Figure: initial state I and successor states J, K, L, M with their local-heuristic scores.)
Now J is the current state, with score 6 > the score of I (4).
So, in step 2, three moves are possible from the best state J.
All the neighbors (K, L, M) of node J have a score (4) lower than the value of J (6), so J is a local maximum,
and no further move is possible from states K, L, and M. The search thus falls into a trap. To overcome this
problem of the LOCAL function, we can apply the GLOBAL heuristic.
As the value of a structure increases, we get nearer to the goal state.
(Figure: the same states J, K, L, M and I re-scored with the global heuristic.)
Now the goal state has a score/cost of 28,
and the initial state has a cost of −28.
Again, the best node for the next
move is the one with the
maximum score/cost.
Further, from state M we can make the
following moves:
(i) PUSH block G on block A
(ii) PUSH block G on block H
(iii) PUSH block H on block A
(iv) PUSH block H on block G
(v) PUSH block A on block H
(vi) PUSH block A on block G back
(vii) PUSH block G on TABLE … and so
on; we select the best node until we get the
structure with a score of +28.
GLOBAL APPROACH
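The two heuristics can be sketched in Python. The state representation (a list of stacks, each listed bottom-to-top) and the three-block demo are assumptions made for illustration; the scoring rules follow the slides: the local heuristic scores each block by what it directly rests on, while the global heuristic scores each block by its entire support structure:

```python
def supports(state):
    """Map each block to what it directly rests on ('table' for bottom blocks)."""
    on = {}
    for stack in state:
        below = 'table'
        for block in stack:
            on[block] = below
            below = block
    return on

def local_score(state, goal_on):
    """LOCAL: +1 for each block resting on the correct thing, -1 otherwise."""
    on = supports(state)
    return sum(1 if on[b] == goal_on[b] else -1 for b in on)

def global_score(state, goal_on):
    """GLOBAL: +1 per block beneath a block whose whole support structure is
    correct, -1 per block beneath one whose structure is wrong."""
    score = 0
    for stack in state:
        for height, block in enumerate(stack):
            correct = all((stack[i - 1] if i > 0 else 'table') == goal_on[stack[i]]
                          for i in range(height + 1))
            score += height if correct else -height
    return score

# assumed 3-block goal: C on B on A, with A on the table
goal_on = {'A': 'table', 'B': 'A', 'C': 'B'}
print(local_score([['C', 'A', 'B']], goal_on))   # -1: only B rests on the right block
print(global_score([['C', 'A', 'B']], goal_on))  # -3: no block's structure is correct
print(global_score([['A', 'B', 'C']], goal_on))  # 3 = 0+1+2; for 8 blocks, 0+1+...+7 = 28
```

The global function rewards correct structure all the way down, which is why the 8-block goal scores 0+1+…+7 = 28 and a fully wrong stack scores −28, matching the slides.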
Simulated Annealing Analogies
(Figure: metal annealing and a toy analogy.)
Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process,
some downhill moves may be made.
• This lowers the chance of getting caught at a local maximum, a plateau,
or a ridge.
• It is inspired by the physical process of controlled cooling (crystallization, metal
annealing):
■ A metal is heated up to a high temperature and then is progressively cooled in a controlled
way until some solid state is reached.
■ If the cooling is adequate, the minimum-energy structure (a global minimum) is obtained.
■ Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be
attained.
Simulated Annealing
■ It is a stochastic hill-climbing algorithm (stochastic local search,
SLS):
■ A successor is selected among all possible successors according to a
probability distribution.
■ The successor can be worse than the current state.
■ A Physical Analogy:
Imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy
surface. If we just let the ball roll, it will come to rest at a local minimum. If we
shake the surface, we can bounce the ball out of the local minimum. The trick is
to shake just hard enough to bounce the ball out of local minima but not hard
enough to dislodge it from the global minimum. The simulated-annealing solution
is to start by shaking hard (i.e., at a high temperature) and then gradually reduce
the intensity of the shaking (i.e., lower the temperature).
Simulated annealing
• Main idea: steps taken in random directions do not decrease (and actually
increase) the chance of finding a global optimum.
• Disadvantage: the structure of the algorithm increases the execution time.
• Advantage: the random steps make it possible to escape small “hills”.
• Temperature: it determines (through a probability function) the amplitude of
the steps: long at the beginning, then shorter and shorter.
• Annealing: when the amplitude of the random steps becomes too small to
allow descending the hill under consideration, the result of the algorithm is said
to be annealed.
• If the move improves the situation, it is always accepted. Otherwise, the
algorithm accepts the move with some probability less than 1.
• The probability decreases exponentially with the “badness” of the move, i.e.,
the amount ΔE by which the evaluation is worsened.
• If the schedule lowers T slowly enough, the algorithm will find a global
optimum with probability approaching 1.
Simulated annealing
https://www.youtube.com/watch?v=NI3WllrvWoc
Simulated annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
inputs: problem, a problem
schedule, a mapping from time to “temperature”
local variables: current, a node
next, a node
T, a “temperature” controlling the probability of downward steps
current ← MAKE-NODE(INITIAL-STATE[problem])
for t ← 1 to ∞ do
T ← schedule[t]
if T = 0 then return current
next ← a randomly selected successor of current
ΔE ← VALUE[next] − VALUE[current]
if ΔE > 0 then current ← next
else current ← next only with probability e^(ΔE/T)
Terminology from the physical problem is often used. Downhill moves are accepted readily early in the annealing schedule and then less
often as time goes on. The schedule input determines the value of the temperature T as a function of time.
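The pseudocode translates directly to Python. The toy objective, successor function, and geometric cooling schedule below are illustrative assumptions, not part of the algorithm itself:

```python
import math
import random

def simulated_annealing(initial, value, random_successor, schedule):
    """Accept every uphill move; accept a downhill move with probability e^(dE/T)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:                      # schedule reached zero: return current state
            return current
        nxt = random_successor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

# toy run: maximize f(x) = -(x - 7)^2 over the integers 0..20
random.seed(1)
f = lambda x: -(x - 7) ** 2
step = lambda x: max(0, min(20, x + random.choice((-1, 1))))
cooling = lambda t: 0.95 ** t if t < 300 else 0   # simple geometric schedule
result = simulated_annealing(0, f, step, cooling)
print(result)
```

Early on, T is large and the exponent e^(ΔE/T) is close to 1 even for bad moves; as T shrinks, the same bad move is accepted with vanishing probability and the search behaves like ordinary hill climbing.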
Probability calculation
• The probability also decreases as the “temperature” T
goes down:
“bad” moves are more likely to be allowed at the start,
when T is high, and they become more unlikely as T
decreases.
Simulated annealing
• Aim: to avoid local optima, which represent a problem in hill climbing.
• Solution: occasionally take steps in a direction different from the one in
which the increase (or decrease) of energy is maximum.
Simulated annealing: conclusions
■ It is suitable for problems in which the global optimum is
surrounded by many local optima.
■ It is suitable for problems in which it is difficult to find a
good heuristic function.
■ Determining the values of the parameters can be a problem
and requires experimentation.
Local Beam Search
Local beam search keeps track of k states rather than just one. At each step, all the
successors of all k states are generated; if any one is a goal, the algorithm halts,
otherwise it selects the k best successors from the complete list and repeats.
Local beam search
In stochastic beam search, instead of choosing the best k individuals, k
individuals are selected at random, with better-evaluated individuals more
likely to be chosen.
This is done by making the probability of being chosen a function of the
evaluation function.
Stochastic beam search tends to allow more diversity among the k individuals
than plain beam search does.
Stochastic beam search: Genetic Algorithms (GA)
Genetic algorithms
■ A genetic algorithm (GA) is a variant of stochastic beam search, in
which two parent states are combined.
■ Inspired by the process of natural selection:
■ Living beings adapt to the environment thanks to the characteristics
inherited from their parents.
■ The possibility of survival and reproduction is proportional to the
goodness of these characteristics.
■ The combination of “good” individuals can produce better adapted
individuals.
Genetic algorithms
■ To solve a problem via GAs requires:
■ The size of the initial population:
■ GAs start with a set of k randomly generated states
■ The representation of the states (individuals)
■ A function which measures the fitness of the states
■ A strategy to combine individuals:
■ Operators which combine states to obtain new states
■ Cross-over and mutation operators
Genetic algorithms: algorithm
■ Steps of the basic GA:
1. N individuals from the current population are
selected to form the intermediate population
(according to some predefined criteria).
2. Individuals are paired, and for each pair:
a) the crossover operator is applied and two new
individuals are obtained;
b) the new individuals are mutated.
■ The resulting individuals form the new
population.
■ The process is iterated until the population
converges or a specific number of iterations has
passed.
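The steps above can be sketched in Python on the classic "OneMax" toy problem (maximize the number of 1-bits in a string). The problem, operators, and parameter values are illustrative assumptions:

```python
import random

def genetic_algorithm(population, fitness, crossover, mutate,
                      generations=100, mutation_rate=0.1):
    """Basic GA: fitness-proportional selection, pairwise crossover, mutation."""
    for _ in range(generations):
        weights = [fitness(ind) + 1 for ind in population]  # +1 avoids a zero total
        new_population = []
        for _ in range(len(population) // 2):
            p1, p2 = random.choices(population, weights=weights, k=2)  # selection
            point = random.randrange(1, len(p1))                       # crossover point
            c1, c2 = crossover(p1, p2, point)
            new_population += [mutate(c1) if random.random() < mutation_rate else c1,
                               mutate(c2) if random.random() < mutation_rate else c2]
        population = new_population
    return max(population, key=fitness)

# OneMax: individuals are 8-bit lists, fitness = number of 1s
fitness = sum
crossover = lambda a, b, p: (a[:p] + b[p:], b[:p] + a[p:])
def mutate(ind):
    i = random.randrange(len(ind))            # flip one random bit
    return ind[:i] + [1 - ind[i]] + ind[i + 1:]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best = genetic_algorithm(population, fitness, crossover, mutate)
print(best, fitness(best))
```

After a hundred generations the population converges toward the all-ones string, since fitter individuals are chosen as parents more often.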
Genetic Algorithms
Population
Fitness
Selection
Crossover
Mutation
8-Queens Problem
Solving 8-queens problem using Genetic algorithms
■ An 8-queens state must specify the positions of 8 queens, one per
column, so a state can be represented as 8 digits, each in the range from 1 to 8.
■ Each state is rated by the evaluation function, or fitness
function.
■ A fitness function should return higher values for better states, so,
for the 8-queens problem the number of non-attacking pairs of
queens is used (8×7/2 = 28 for a solution).
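The fitness function can be written directly. The state `(2, 4, 7, 4, 8, 5, 5, 2)` is the digit string 24748552 from the textbook's example; the second state is one known 8-queens solution:

```python
from itertools import combinations

def fitness(state):
    """Number of non-attacking pairs of queens; state[i] is the row (1..8)
    of the queen in column i. Maximum is 8*7/2 = 28 for a solution."""
    safe = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        same_row = r1 == r2
        same_diagonal = abs(r1 - r2) == abs(c1 - c2)
        if not same_row and not same_diagonal:
            safe += 1
    return safe

print(fitness((2, 4, 7, 4, 8, 5, 5, 2)))  # 24: the state 24748552
print(fitness((2, 4, 6, 8, 3, 1, 7, 5)))  # 28: a solution, no pair attacks
```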
Solving 8-queens problem using Genetic algorithms
Representing individuals
Generating an initial population
Fitness calculation
Apply a Fitness function
■ 24/(24+23+20+11) = 31%
■ 23/(24+23+20+11) = 29%, etc.
Selection
Stochastic Universal sampling
Genetic algorithms
■ Fitness function: number of non-attacking pairs of queens (min = 0, max = (8 × 7)/2 = 28)
Figure panels: (a) four states for the 8-queens problem; (b) two pairs of states randomly selected based on fitness, with random crossover points selected; (c) new states after crossover; (d) random mutation applied.
Genetic algorithms: 8-queens problem
■ The initial population in (a)
■ is ranked by the fitness function in (b),
■ resulting in pairs for mating in (c).
■ They produce offspring in (d),
■ which are subject to mutation in (e).
Summary: Genetic Algorithm
Genetic algorithms
Crossover has the effect of “jumping” to a completely different new
part of the search space (quite non-local).
Genetic algorithms: application
■ In practice, GAs have had a widespread
impact on optimization problems, such as:
■ circuit layout
■ scheduling

More Related Content

Similar to Steps to Solve Problems in Chapter 2

Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxKirti Verma
 
Search-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdfSearch-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdfMrRRThirrunavukkaras
 
2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.pptDr. Naushad Varish
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxCCBProduction
 
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...Asst.prof M.Gokilavani
 
Dynamic Programming
Dynamic ProgrammingDynamic Programming
Dynamic Programmingcontact2kazi
 
Problem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxProblem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxkitsenthilkumarcse
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxYousef Aburawi
 
Heuristic or informed search
Heuristic or informed searchHeuristic or informed search
Heuristic or informed searchHamzaJaved64
 
Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}FellowBuddy.com
 
ProblemSolving(L-2).pdf
ProblemSolving(L-2).pdfProblemSolving(L-2).pdf
ProblemSolving(L-2).pdfAQSA SHAHID
 

Similar to Steps to Solve Problems in Chapter 2 (20)

Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptx
 
Lecture 3 Problem Solving.pptx
Lecture 3 Problem Solving.pptxLecture 3 Problem Solving.pptx
Lecture 3 Problem Solving.pptx
 
CH2_AI_Lecture1.ppt
CH2_AI_Lecture1.pptCH2_AI_Lecture1.ppt
CH2_AI_Lecture1.ppt
 
Lec2 state space
Lec2 state spaceLec2 state space
Lec2 state space
 
Hill climbing algorithm
Hill climbing algorithmHill climbing algorithm
Hill climbing algorithm
 
Search-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdfSearch-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdf
 
2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt
 
Rai practical presentations.
Rai practical presentations.Rai practical presentations.
Rai practical presentations.
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
 
Lecture 2
Lecture 2Lecture 2
Lecture 2
 
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...
AI3391 ARTIFICAL INTELLIGENCE Session 5 Problem Solving Agent and searching f...
 
Dynamic Programming
Dynamic ProgrammingDynamic Programming
Dynamic Programming
 
l2.pptx
l2.pptxl2.pptx
l2.pptx
 
l2.pptx
l2.pptxl2.pptx
l2.pptx
 
Problem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxProblem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptx
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptx
 
Heuristic or informed search
Heuristic or informed searchHeuristic or informed search
Heuristic or informed search
 
Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}
 
ProblemSolving(L-2).pdf
ProblemSolving(L-2).pdfProblemSolving(L-2).pdf
ProblemSolving(L-2).pdf
 
02LocalSearch.pdf
02LocalSearch.pdf02LocalSearch.pdf
02LocalSearch.pdf
 

Recently uploaded

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 

Recently uploaded (20)

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 

Steps to Solve Problems in Chapter 2

  • 1. Chapter 2 Problem Solving Prepared by Mrs. Megha V Gupta New Horizon Institute of Technology and Management
  • 2. Steps in building a system to solve a particular problem 1. Define the problem precisely – find input situations as well as final situations for an acceptable solution 2. Analyze the problem – find few important features that may have impact on the appropriateness of various possible techniques for solving the problem 3. Isolate and represent task knowledge necessary to solve the problem 4. Choose the best problem-solving technique(s) and apply to the particular problem
  • 3. PROBLEMS, PROBLEM SPACES AND SEARCH Problem solving is a process of generating solutions from observed data. • A ‘problem space’ The set of all possible configurations is the space of the problem state, also known as problem space. The environment where the search is performed is the problem space. ■ A ‘state space’ of the problem is the set of all states reachable from the initial state. • A ‘search’ refers to the search for a solution in a problem space.
  • 4. State Space Search ■ A state space represents a problem in terms of states and operators that change states. ■ A state space consists of: ▪ A representation of the states the system can be in. ▪ A set of operators that can change one state into another state. Often the operators are represented as programs that change a state representation to represent the new state. ▪ An initial state. ▪ A set of final states; some of these may be desirable, others undesirable. This set is often represented implicitly by a program that detects terminal states.
  • 6. 8-puzzle problem “It has set of a 3x3 board having 9 block spaces out of which, 8 blocks are having tiles bearing number from 1 to 8. One space is left blank. The tile adjacent to blank space can move into it. We have to arrange the tiles in a sequence.”
  • 7. The start state is any situation of tiles, and goal state is tiles arranged in a specific sequence. Solution: reporting of “movement of tiles” in order to reach the goal state. The transition function (direction in which blank space effectively moves either towards left or right or up or down) generates the legal state 8-puzzle problem Chapter 2 Problem Solving 7
  • 8. Chapter 2 Problem Solving 8
  • 9. Example Initial state 4 1 3 2 6 7 5 8 1 2 3 4 5 6 7 8 Goal state
  • 10. Water-Jug Problem “You are given two jugs, a 4-gallon one and a 3-gallon one, a pump which has unlimited water which you can use to fill the jug, and the ground on which water may be poured. Neither jug has any measuring markings on it. How can you get exactly 2 gallons of water in the 4-gallon jug?
  • 11. Water jug problem ■ A water jug problem: 4-gallon and 3-gallon - no marker on the bottle - pump to fill the water into the jug - How can you get exactly 2 gallons of water into the 4-gallons jug? 4 3
  • 12. A state space search (x,y) : order pair x : water in 4-gallons x = 0,1,2,3,4 y : water in 3-gallons y = 0,1,2,3 start state : (0,0) goal state : (2,n) where n = any value Rules : 1. Fill the 4 gallon-jug (4,-) 2. Fill the 3 gallon-jug (-,3) 3. Empty the 4 gallon-jug (0,-) 4. Empty the 3 gallon-jug (-,0)
  • 15. A water jug solution 4-Gallon Jug 3-Gallon Jug Rule Applied 0 0 0 3 2 3 0 9 3 3 2 4 2 7 0 2 5 or 12 2 0 9 or 11 Solution : path / plan
  • 16.
  • 17. Solution 3 Chapter 2 Problem Solving 17
  • 18. Missionaries and Cannibals Three missionaries and three cannibals wish to cross the river. They have a small boat that will carry up to two people. Everyone can navigate the boat. If at any time the Cannibals outnumber the Missionaries on either bank of the river, they will eat the Missionaries. Find the smallest number of crossings that will allow everyone to cross the river safely. https://www.youtube.com/watch?v=W9NEWxabGmg
  • 20.
  • 21. Farmer, Wolf, Goat and the Cabbage https://www.youtube.com/watch?v=go294ZR4Rdg
  • 22. State Space Representation Chapter 2 Problem Solving 22
  • 23. Problem-solving agent ■ Problem-solving agent ■ A kind of goal-based agent ■ It solves problem by ■ finding sequences of actions that lead to desirable states (goals) ■ To solve a problem, ■ the first step is the goal formulation, based on the current situation ■ The algorithms are uninformed ■ No extra information about the problem other than the definition ■ No extra information ■ No heuristics (rules)
  • 24. Goal formulation ■ The goal is formulated ■ as a set of world states, in which the goal is satisfied ■ Reaching from initial state -> goal state ■ Actions are required ■ Actions are the operators ■ causing transitions between world states ■ Actions should be abstract enough at a certain degree, instead of very detailed ■ E.g., turn left VS turn left 30 degree, etc.
  • 25. Problem formulation ■ The process of deciding what actions and states to consider, given a goal. ■ E.g., driving Amman -> Zarqa ■ in-between states and actions defined ■ States: some places in Amman & Zarqa ■ Actions: turn left, turn right, go straight, accelerate & brake, etc. ■ Because there are many ways to achieve the same goal, those ways are together expressed as a tree ■ With multiple options of unknown value at a point, the agent can examine different possible sequences of actions and choose the best ■ This process of looking for the best sequence is called search ■ A search algorithm takes a problem as input and returns a solution (the best sequence) in the form of an action sequence.
  • 26. “formulate, search, execute” Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple “formulate, search, execute” design for the agent.
  • 27. Problem-solving agents A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
  • 29. Example: Romania ■ On holiday in Romania; currently in Arad. ■ Flight leaves tomorrow from Bucharest ■ Formulate goal: ■ be in Bucharest ■ Formulate problem: ■ states: various cities ■ actions: drive between cities ■ Find solution: ■ sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  • 31. Well-defined problems and solutions A problem is defined by 5 components: ■ Initial state ■ Actions ■ Transition model (or successor function) ■ Goal test ■ Path cost
  • 32. Well-defined problems and solutions 1. The initial state that the agent starts in 2. Actions: a description of the possible actions available to the agent. 3. Transition model: a description of what each action does. A successor is any state reachable from a given state by a single action. Together, the initial state, actions, and transition model define the state space ■ the set of all states reachable from the initial state by any sequence of actions. A path in the state space: ■ a sequence of states connected by a sequence of actions.
  • 33. Well-defined problems and solutions 4. The goal test, which determines whether a given state is a goal state ■ Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. ■ Sometimes the goal is described by an abstract property rather than an explicitly enumerated set of states. E.g., in chess, the goal is to reach a state called “checkmate,” where the opponent’s king is under attack and can’t escape.
  • 34. Well-defined problems and solutions 5. A path cost function, ■ assigns a numeric cost to each path ■ = performance measure ■ denoted by g ■ to distinguish the best path from others Usually the path cost is the sum of the step costs of the individual actions (in the action list) The solution of a problem is then ■ a path from the initial state to a state satisfying the goal test Optimal solution ■ the solution with lowest path cost among all solutions
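The five components above can be collected into a small data structure. A hypothetical sketch (the class and field names are illustrative, not from any library), with a toy problem attached to show how the pieces fit:

```python
class Problem:
    """Container for the five components of a well-defined problem."""
    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial
        self.actions = actions        # state -> iterable of actions
        self.result = result          # (state, action) -> next state
        self.goal_test = goal_test    # state -> bool
        self.step_cost = step_cost    # (state, action, next) -> number

    def path_cost(self, states, acts):
        """g: sum of the step costs of the individual actions."""
        return sum(self.step_cost(s, a, self.result(s, a))
                   for s, a in zip(states, acts))

# Tiny example: start at 0, move by +1 or +2, reach 3, unit step cost.
p = Problem(
    initial=0,
    actions=lambda s: [1, 2],
    result=lambda s, a: s + a,
    goal_test=lambda s: s == 3,
    step_cost=lambda s, a, n: 1,
)
```

A solution is then any action sequence whose results lead from `p.initial` to a state passing `p.goal_test`; the optimal solution is the one minimizing `p.path_cost`.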
  • 35. Vacuum world state space graph ■ states? The state is determined by both the agent location and the dirt locations. ■ Initial state: any ■ actions? Left, Right, Suck ■ Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. ■ goal test? no dirt at all locations ■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
  • 36. Example: The 8-puzzle ■ states? locations of tiles ■ Initial state: Any state can be designated as the initial state. ■ actions? move blank left, right, up, down ■ Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure above, the resulting state has the 5 and the blank switched. ■ goal test? = goal state (given) ■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
  • 37. Example: robotic assembly ■ states?: real-valued coordinates of robot joint angles parts of the object to be assembled ■ actions?: continuous motions of robot joints ■ goal test?: complete assembly ■ path cost?: time to execute
  • 38. Traveling Salesman Problem (TSP) ■ States: cities ■ Initial state: A ■ Successor function: travel from one city to another connected by a road ■ Goal test: the trip has visited every city exactly once and has returned to A ■ Path cost: traveling time
  • 39. Using only four colors, you have to color a planar map so that no two adjacent regions have the same color. Initial State: Planar map with no regions colored. Goal Test: All regions of the map are colored and no two adjacent regions have the same color. Successor function: Choose an uncolored region and color it with a color that is different from all adjacent regions. Cost function: Could be 1 for each color used.
  • 40. Airline Travel problems ■ States: Each state obviously includes a location (e.g., an airport) and the current time. Furthermore, because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these “historical” aspects. ■ Initial state: This is specified by the user’s query. ■ Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. ■ Transition model: The state resulting from taking a flight will have the flight’s destination as the current location and the flight’s arrival time as the current time. ■ Goal test: Are we at the final destination specified by the user? ■ Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
  • 41. QUIZ ■ A vacuum Cleaner world with two location, two sensors - location and dirt , three actions - left, right and suck will have a state space with how many possible states ? ■ A) 6 ■ B) 8 ■ C) 10 ■ D) 12 41
  • 42. Search tree ■ Initial state ■ The root of the search tree is a search node ■ Expanding ■ applying successor function to the current state thereby generating a new set of states ■ leaf nodes ■ the states having no successors Fringe : Set of search nodes that have not been expanded yet.
  • 44. Search tree ■ The essence of searching ■ in case the first choice is not correct ■ choosing one option and keeping the others for later inspection ■ Hence we have the search strategy ■ which determines the choice of which state to expand ■ good choice -> less work -> faster ■ Important: ■ state space ≠ search tree
  • 45. Search tree ■ A node is having five components: ■ STATE: which state it is in the state space ■ PARENT-NODE: from which node it is generated ■ ACTION: which action applied to its parent-node to generate it ■ PATH-COST: the cost, g(n), from initial state to the node n itself ■ DEPTH: number of steps along the path from the initial state
  • 46. Implementation: states vs. nodes ■ A state is a (representation of) a physical configuration ■ A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth ■ The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
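The node data structure on the previous slide can be sketched directly. A minimal version (the `successors` callback is an assumed stand-in for the slide's SuccessorFn):

```python
class Node:
    """Search-tree node with the five components from the slide."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # STATE: point in the state space
        self.parent = parent          # PARENT-NODE
        self.action = action          # ACTION applied to the parent
        self.path_cost = path_cost    # PATH-COST g(n) from the root
        self.depth = 0 if parent is None else parent.depth + 1  # DEPTH

    def expand(self, successors):
        """successors: state -> iterable of (action, next_state, step_cost).
        Creates child nodes, filling in the various fields."""
        return [Node(s, self, a, self.path_cost + c)
                for a, s, c in successors(self.state)]

    def solution(self):
        """Action sequence from the root down to this node."""
        node, acts = self, []
        while node.parent is not None:
            acts.append(node.action)
            node = node.parent
        return acts[::-1]

# Usage: expand a root node with two hypothetical successors.
succ = lambda s: [('AB', 'B', 2), ('AC', 'C', 5)] if s == 'A' else []
root = Node('A')
children = root.expand(succ)
```

Note the state/node distinction from the slide: two different nodes can contain the same state, reached by different paths.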
  • 47. Search strategies ■ A search strategy is defined by picking the order of node expansion ■ Strategies are evaluated along the following dimensions: ■ Completeness (guarantee to find a solution if there is one): does it always find a solution if one exists? ■ time complexity (how long does it take to find a solution): number of nodes generated during the search ■ space complexity (how much memory is needed to perform the search): maximum number of nodes stored in memory ■ Optimality (does it give highest quality solution when there are several different solutions): does it always find a least-cost solution?
  • 48. Measuring problem-solving performance ■ Time and space complexity are measured in terms of ■ b: branching factor of the search tree (max. no. of successors of any node) ■ d: depth of the least-cost solution (shallowest goal node) ■ m: the maximum length of any path in the state space (maximum depth of the state space)
  • 49. Search strategies ■ Uninformed search or blind search ■ no information about the number of steps ■ or the path cost from the current state to the goal ■ is applicable when we only distinguish goal states from non-goal states. ■ search the state space blindly ■ Informed search, or heuristic search ■ a cleverer strategy that searches toward the goal, ■ based on the information from the current state so far ■ is applied if we have some knowledge of the path cost or the number of steps between the current state and a goal.
  • 50. Uninformed search Methods strategies that use only the information available in the problem definition. While searching you have no clue whether one non-goal state is better than any other. Your search is blind. ■ Breadth-first search ■ Uniform cost search ■ Depth-first search ■ Depth-limited search ■ Iterative deepening search ■ Bidirectional search
  • 51. Breadth-first search ■ Expand shallowest unexpanded node Implementation: ■ fringe is a FIFO queue, i.e., new successors go at end of queue Is A a goal state?
  • 52. Breadth-first search Expand: fringe=[C,D,E] Is C a goal state? Expand: fringe=[D,E,F,G] Is D a goal state?
  • 54. Properties of breadth-first search ■ Complete? Yes (if b is finite), i.e., if the shallowest goal node is at some finite depth d ■ Time Complexity? b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1)) ■ Space Complexity? O(b^(d+1)) (keeps every node in memory) ■ Optimal? Not in general; yes if cost = 1 per step (or whenever the path cost is a non-decreasing function of the depth of the node) Space is the bigger problem (more than time)
  • 55. Breadth First Search Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is b + b^2 + b^3 + … + b^d = O(b^d). (If the algorithm were to apply the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected and the time complexity would be O(b^(d+1)).)
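A minimal sketch of breadth-first search with a FIFO fringe, applying the goal test when nodes are generated (as the slide recommends for the O(b^d) bound); function names are illustrative:

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest unexpanded node first (FIFO fringe)."""
    if goal_test(start):
        return [start]
    parent = {start: None}             # also serves as the visited set
    fringe = deque([start])
    while fringe:
        state = fringe.popleft()
        for nxt in successors(state):
            if nxt in parent:
                continue
            parent[nxt] = state
            if goal_test(nxt):         # test on generation, not expansion
                path = [nxt]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            fringe.append(nxt)
    return None

# Usage on a small explicit tree like the slide's A/B/C... example.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': [], 'E': [], 'F': [], 'G': []}
path = breadth_first_search('A', lambda s: s == 'G', graph.__getitem__)
```

The `parent` dictionary doubles as the "keeps every node in memory" cost noted on the properties slide.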
  • 56. Breadth-first search (figure: a search tree over states S, A, B, C, D, E, F, G with edge costs, explored level by level)
  • 59. Uniform Cost Search
  • 60. Uniform-cost search Implementation: fringe = queue ordered by path cost. Equivalent to breadth-first search if all step costs are equal. Complete? Yes, if every step cost exceeds some small positive constant. Time? Number of nodes with path cost less than that of the optimal solution. Space? Number of nodes on paths with path cost less than that of the optimal solution. Optimal? Yes: nodes are expanded in increasing order of path cost.
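A sketch of uniform-cost search with a priority queue ordered by g(n); here the goal test is applied on expansion rather than generation, which is what preserves optimality (names are illustrative):

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Fringe is a priority queue ordered by path cost g(n).
    successors: state -> iterable of (next_state, step_cost)."""
    fringe = [(0, start)]
    best_g = {start: 0}
    parent = {start: None}
    while fringe:
        g, state = heapq.heappop(fringe)
        if g > best_g.get(state, float('inf')):
            continue                   # stale queue entry, skip it
        if goal_test(state):           # test on expansion => optimal
            path = [state]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return g, path[::-1]
        for nxt, cost in successors(state):
            ng = g + cost
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                parent[nxt] = state
                heapq.heappush(fringe, (ng, nxt))
    return None

# Usage: the direct edge A->G costs 5, but the detour via B costs 2.
graph = {'A': [('B', 1), ('G', 5)], 'B': [('G', 1)], 'G': []}
cost, route = uniform_cost_search('A', lambda s: s == 'G',
                                  graph.__getitem__)
```

Breadth-first search would return A->G here after one expansion; uniform-cost search correctly prefers the cheaper two-step route.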
  • 61. Depth-first search ■ Always expands one of the nodes at the deepest level of the tree ■ Only when the search hits a dead end does it go back and expand nodes at shallower levels ■ Dead end -> leaf nodes that are not the goal ■ Backtracking search ■ only one successor is generated on expansion, rather than all successors ■ uses even less memory
  • 62. Depth-first search ■ Expand deepest unexpanded node ■ Implementation: ■ fringe = Last In First Out (LIFO) queue, i.e., put successors at front Is A a goal state? queue=[B,C] Is B a goal state?
  • 63. Depth-first search queue=[D,E,C] Is D = goal state? queue=[H,I,E,C] Is H = goal state?
  • 64. Depth-first search queue=[I,E,C] Is I = goal state? queue=[E,C] Is E = goal state?
  • 65. Depth-first search queue=[J,K,C] Is J = goal state? queue=[K,C] Is K = goal state?
  • 66. Depth-first search queue=[C] Is C = goal state? queue=[F,G] Is F = goal state?
  • 67. Depth-first search queue=[L,M,G] Is L = goal state? queue=[M,G] Is M = goal state?
  • 68. Example DFS
  • 69. Depth-first search (figure: the same search tree over states S, A, B, C, D, E, F, G with edge costs, explored in depth-first order)
  • 70. Properties of depth-first search ■ Complete? No: fails in infinite-depth spaces or with loops ■ complete in finite spaces ■ Time? O(b^m): terrible if m (the maximum depth of the state space) is much larger than d (the depth of the shallowest solution), and infinite if the tree is unbounded; may be much faster than breadth-first search if solutions are dense ■ Space? O(bm), linear: the memory requirement is branching factor (b) × maximum depth (m) ■ Optimal? No (it may find a non-optimal goal first); cannot guarantee the shallowest solution.
  • 71. Depth First Search A depth-first tree search may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space. A depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored. For a state space with branching factor b and maximum depth m, depth-first search requires storage of only O(bm) nodes.
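A sketch of depth-first search with a LIFO fringe. Two liberties, flagged here: a `visited` set is added so the sketch does not loop on graphs (the slide's plain tree search has no such guard), and whole paths are stored on the stack for brevity, which is simpler but looser than the O(bm) single-path storage quoted above.

```python
def depth_first_search(start, goal_test, successors):
    """Expand the deepest unexpanded node first (LIFO fringe)."""
    fringe = [[start]]                 # stack of paths, deepest on top
    visited = set()
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                fringe.append(path + [nxt])
    return None

# Usage on a small graph with a cycle A <-> B that plain tree search
# would loop on forever.
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['D'], 'D': []}
path = depth_first_search('A', lambda s: s == 'D', graph.__getitem__)
```

Note the returned path need not be the shortest one, matching "Optimal? No" on the properties slide.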
  • 73. DFS
  • 74. Depth-Limited Search ■ Depth-first search is clearly dangerous: • if the tree is very deep, we risk finding a suboptimal solution; • if the tree is infinite, we risk an infinite loop. ■ The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search. ■ Three possible outcomes: ■ Solution ■ Failure (no solution) ■ Cutoff (no solution within the cutoff)
  • 75. Depth-limited search ■ However, it is usually not easy to choose a suitable maximum depth ■ too small -> no solution can be found ■ too large -> we suffer the same problems as depth-first search ■ The search is complete only if the limit l ≥ d ■ and still not optimal
  • 76. Depth-limited search (figure: the same search tree over S, A, B, …, G searched with depth limit = 3)
  • 79. Iterative deepening search ■ Usually we do not know a reasonable depth limit in advance. ■ Iterative deepening search repeatedly runs depth-limited search for increasing depth limits 0, 1, 2, . . . ■ this essentially combines the advantages of depth-first and breadth-first search; ■ the procedure is complete, and optimal when all step costs are equal; ■ the memory requirement is similar to that of depth-first search;
  • 80. Iterative deepening search The iterative deepening search algorithm repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found, or when depth-limited search returns failure, meaning that no solution exists.
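The two-part scheme just described can be sketched directly: a recursive depth-limited search that distinguishes "cutoff" from genuine failure, driven by an outer loop over increasing limits (function names are illustrative):

```python
from itertools import count

def depth_limited_search(state, goal_test, successors, limit):
    """Returns a path, the string 'cutoff', or None (failure)."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'                # depth limit reached
    cutoff = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors,
                                      limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff else None

def iterative_deepening_search(start, goal_test, successors):
    """Runs depth-limited search with limits 0, 1, 2, ... until a
    solution is found or DLS reports failure (no solution exists)."""
    for limit in count():
        result = depth_limited_search(start, goal_test, successors,
                                      limit)
        if result != 'cutoff':
            return result

# Usage on a small tree: the goal E sits at depth 2.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}
found = iterative_deepening_search('A', lambda s: s == 'E',
                                   tree.__getitem__)
```

The 'cutoff'/None distinction is what lets the outer loop terminate on finite trees with no goal, instead of deepening forever.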
  • 85. ■ Note: We visit top-level nodes multiple times. The last (maximum-depth) level is visited once, the second-to-last level twice, and so on. It may seem expensive, but it turns out not to be so costly, since in a tree most of the nodes are in the bottom level. So it does not matter much if the upper levels are visited multiple times. ■ Number of nodes generated in an iterative deepening search to depth d with branching factor b: N(IDS) = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 1·b^d
  • 86. Iterative deepening search ■ For b = 2, d = 3: ■ N(BFS) = b + b^2 + b^3 + (b^(d+1) − b) = 2 + 4 + 2^3 + (2^4 − 2) = 2 + 4 + 8 + 14 = 28 ■ N(IDS) = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 1·b^d = 4·1 + 3·2 + 2·2^2 + 1·2^3 = 4 + 6 + 8 + 8 = 26 ■ Iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known.
  • 87. Properties of iterative deepening search ■ Complete? Yes ■ Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d) ■ Space? O(b·d) ■ Optimal? Yes, if step cost = 1
  • 88. Iterative deepening search ■ Suppose we have a tree with branching factor b (number of children of each node) and depth d, i.e., on the order of b^d nodes. ■ In an iterative deepening search, the nodes on the bottom level are expanded once, those on the next-to-bottom level are expanded twice, and so on, up to the root of the search tree, which is expanded d+1 times. ■ IDDFS has the same asymptotic time as DFS and BFS, but it is indeed somewhat slower than both, as it has a higher constant factor in its time complexity expression. ■ IDDFS is best suited for searching a complete infinite tree.
  • 89. Example IDS
  • 90. Bidirectional search ■ Run two simultaneous searches ■ one forward from the initial state, another backward from the goal ■ stop when the two searches meet ■ However, searching backward is difficult ■ there may be a huge number of goal states ■ at the goal state, which actions were used to reach it? ■ can the actions be reversed to compute a state's predecessors?
  • 93. Informed Search Methods ■ How can we make use of other knowledge about the problem to improve searching strategy? ■ Map example: ■ Heuristic: Expand those nodes closest in “straight-line” distance to goal ■ 8-puzzle: ■ Heuristic: Expand those nodes with the most tiles in place
  • 94. Heuristic ■ Heuristics (Greek heuriskein = find, discover): "the study of the methods and rules of discovery and invention". ■ Heuristic - a “rule of thumb” used to help guide search ■ often, something learned experientially and recalled when needed ■ Heuristic Function - function applied to a state in a search space to indicate a likelihood of success if that state is selected
  • 95. Heuristic function ■ A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n). ■ h(n) = estimated cost of the cheapest path from node n to a goal node Example: We want a path from Kolkata to Guwahati. A heuristic for Guwahati may be the straight-line distance between Kolkata and Guwahati: h(Kolkata) = EuclideanDistance(Kolkata, Guwahati) Heuristics can also help speed up exhaustive, blind search, such as depth-first and breadth-first search.
  • 98. A simple 8-puzzle heuristic (figure: a start board, its successor boards — one per legal move — and the GOAL board). Which move is best?
  • 99. Another approach ■ Number of tiles in the incorrect position. ■ This can also be considered a lower bound on the number of moves from a solution! ■ The “best” move is the one with the lowest number returned by the heuristic. (Figure: the successor boards from the previous slide score h = 2, h = 4, and h = 3.)
  • 100. heuristics E.g., for the 8-puzzle: ■ h1(n) = number of misplaced tiles ■ h2(n) = total Manhattan distance (i.e., sum of the distances of the tiles from the goal position) ■ h1(S) = 8 ■ h2(S) = 3+1+2+2+2+3+3+2 = 18
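Both heuristics are a few lines each. A sketch, with one stated assumption: the goal layout below is the row-major ordering with the blank last, whereas the slide's figures use other goal layouts, so the h1(S) = 8 and h2(S) = 18 values above refer to the slide's own figure, not to this code.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 = blank; row-major 3x3 board

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state)
               if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan (city-block) distance of the tiles from
    their goal squares."""
    dist = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = GOAL.index(t)            # goal square of tile t
        dist += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return dist

# One swap away from the goal: tile 8 is one square left of home.
almost = (1, 2, 3, 4, 5, 6, 7, 0, 8)
```

h2 dominates h1 (h2(n) ≥ h1(n) for every n): every misplaced tile is at least one Manhattan step from home, so h2 never expands more nodes than h1 under A*.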
  • 101. Best-first search ■ Idea: use an evaluation function f(n) for each node ■ f(n) provides an estimate of the total cost -> expand the node n with the smallest f(n). ■ Implementation: order the nodes in the fringe in increasing order of cost. ■ Special cases: ■ greedy best-first search ■ A* search
  • 102. Best-First Search ■ Use an evaluation function f(n). ■ Always choose the node from the fringe that has the lowest f value. (Figure: a small tree with f values on the nodes.)
  • 103. Greedy best-first search ■ f(n) = h(n) estimate of cost from n to goal ■ e.g., f(n) = straight-line distance from n to Bucharest ■ Greedy best-first search expands the node that appears to be closest to goal.
  • 107. Properties of greedy best-first search ■ Complete? No – can get stuck in loops. ■ Time? O(b^m), but a good heuristic can give dramatic improvement ■ Space? O(b^m) – keeps all nodes in memory ■ Optimal? No; e.g., Arad->Sibiu->Rimnicu Vilcea->Pitesti->Bucharest is shorter!
  • 108. E.g. Route-finding problem: S is the starting state, G is the goal state. Let us run the greedy search algorithm for the graph given in Figure a. The straight-line distance heuristic estimates for the nodes are shown in Figure b.
  • 110. A* search ■ Hart, Nilsson & Raphael, 1968 ■ Best-first search with f(n) = g(n) + h(n) ■ Idea: avoid expanding paths that are already expensive ■ Evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n, h(n) = estimated cost from n to the goal, and f(n) = estimated total cost of the path through n to the goal
  • 111. A* Shortest Path Example
  • 115. A* algorithm Insert the root node into the queue While the queue is not empty Dequeue the element with the highest priority, i.e., the lowest f(n) (if priorities are equal, the alphabetically smaller path is chosen) If the path ends in the goal state, print the path and exit Else insert all the children of the dequeued element, with f(n) as the priority
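The pseudocode above maps onto a priority queue directly. A sketch (names are illustrative; ties on f break on g and then on the state, a close stand-in for the slide's alphabetical rule):

```python
import heapq

def a_star(start, goal_test, successors, h):
    """Best-first search ordered by f(n) = g(n) + h(n).
    successors: state -> iterable of (next_state, step_cost)."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)  # lowest f first
        if goal_test(state):
            return g, path
        for nxt, cost in successors(state):
            ng = g + cost
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                heapq.heappush(fringe,
                               (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Usage: the greedy-looking edge B->G costs 5; the true optimum is
# A->C->G at cost 3. The h values are an admissible assumption.
graph = {'A': [('B', 1), ('C', 1)], 'B': [('G', 5)],
         'C': [('G', 2)], 'G': []}
h = {'A': 2, 'B': 4, 'C': 2, 'G': 0}
cost, route = a_star('A', lambda s: s == 'G',
                     graph.__getitem__, h.__getitem__)
```

With g(n) = 0 this degenerates to greedy best-first search, and with h(n) = 0 to uniform-cost search, which is why both appear as "special cases" of best-first search earlier in the deck.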
  • 120. A* Example Consider the search problem below with start state S and goal state G. The transition costs are next to the edges, and the heuristic values are next to the states. What is the final cost using A* search?
  • 121. 8 Puzzle Example ■ f(n) = g(n) + h(n) ■ What is the usual g(n)? ■ two well-known h(n)’s ■ h1 = the number of misplaced tiles ■ h2 = the sum of the distances of the tiles from their goal positions, using city block distance, which is the sum of the horizontal and vertical distances
  • 122. Applying A* on the 8-puzzle (figure: a search tree of board states annotated with f = g + h values, e.g., 0+4=4 at the start state; heuristic: number of misplaced tiles)
  • 124. A*: admissibility ■ A search algorithm is admissible if, for any graph, it terminates with an optimal path from the start state to the goal state whenever such a path exists. ■ A heuristic function is admissible (i.e., the search terminates with an optimal path) if it satisfies the following property: h'(n) ≤ h*(n) (the heuristic function underestimates the true cost). ■ h'(n) has to be an optimistic estimator; it must never overestimate h*(n). ■ h*(n) = cost of the cheapest solution path from n to the goal node ■ If h(n) is admissible, then the search will find an optimal solution.
  • 125. For example, suppose you're trying to drive from Chicago to New York and your heuristic is what your friends think about geography. If your first friend says, "Hey, Boston is close to New York" (underestimating), then you'll waste time looking at routes via Boston. Before long, you'll realise that any sensible route from Chicago to Boston already gets fairly close to New York before reaching Boston and that actually going via Boston just adds more miles. So you'll stop considering routes via Boston and you'll move on to find the optimal route. Your underestimating friend cost you a bit of planning time but, in the end, you found the right route. Suppose that another friend says, "Indiana is a million miles from New York!" Nowhere else on earth is more than 13,000 miles from New York so, if you take your friend's advice literally, you won't even consider any route through Indiana. This makes you drive for nearly twice as long and cover 50% more distance than you needed to: an overestimating heuristic can make the search miss the optimal route entirely.
  • 126. Memory Bounded Heuristic Search: Recursive Best-First Search (RBFS) ■ How can we solve the memory problem for A* search? ■ Idea: Try something like depth-first search, but let’s not forget everything about the branches we have partially explored. ■ We remember the best f-value we have found so far in the branch we are deleting.
  • 127. RBFS: RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal; hence, higher-level nodes have smaller f-values and will be explored first. Problem: we should keep in memory whatever we can — the best alternative f-value over fringe nodes that are not children — i.e., do I want to back up?
  • 128. Recursive best-first ■ It is a recursive implementation of best-first, with linear spatial cost. ■ It forgets a branch when its cost is more than the best alternative. ■ The cost of the forgotten branch is stored in the parent node as its new cost. ■ The forgotten branch is re-expanded if its cost becomes the best once again.
  • 129. Local search algorithms and optimization problems ■ Local search algorithms operate using a single current state and generally move only to neighbors of that state. ■ In addition to finding goals, these algorithms are useful for solving optimization problems, in which the aim is to find the best state according to an objective function. ■ In local search, there is a function to evaluate the quality of the states, but this is not necessarily related to a cost.
  • 130. Local search and optimization ■ Local search ■ Keep track of single current state ■ Move only to neighboring states ■ Ignore paths ■ Advantages: ■ Use very little memory ■ Can often find reasonable solutions in large or infinite (continuous) state spaces. ■ “Pure optimization” problems ■ All states have an objective function ■ Goal is to find state with max (or min) objective value ■ Does not quite fit into path-cost/goal-state formulation ■ Local search can do quite well on these problems.
  • 131. Local search algorithms ■ These algorithms do not systematically explore all the state space. ■ The heuristic (or evaluation) function is used to reduce the search space (not considering states which are not worth being explored). ■ Algorithms do not usually keep track of the path traveled. The memory cost is minimal.
  • 133. Hill Climbing (Greedy Local Search) ■Searching for a goal state = Climbing to the top of a hill ■Heuristic function to estimate how close a given state is to a goal state. ■ Children are considered only if their evaluation function is better than the one of the parent (reduction of the search space).
  • 138. Simple Hill Climbing Algorithm 1. Evaluate the initial state. 2. Loop until a solution is found or there are no new operators left to be applied: − Select and apply a new operator − Evaluate the new state: * goal -> quit * better than current state -> new current state * not better -> try a new operator
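The two-step loop above can be sketched as follows (names are illustrative; "operators" are represented by a `neighbours` function returning the states they produce):

```python
def simple_hill_climbing(state, value, neighbours):
    """Apply the first operator whose result improves on the current
    state; quit when no neighbour is better (a peak, possibly local)."""
    improved = True
    while improved:
        improved = False
        for nxt in neighbours(state):
            if value(nxt) > value(state):
                state = nxt            # better than current -> new current
                improved = True
                break                  # take the FIRST improving move
    return state

# Usage: climb the 1-D objective -(x - 7)^2, moving by +/-1 from x = 0.
peak = simple_hill_climbing(0,
                            lambda x: -(x - 7) ** 2,
                            lambda x: [x - 1, x + 1])
```

On this single-peaked objective the loop reaches the global maximum at x = 7; on a multi-peaked objective it would stop at whichever local maximum it climbs first, which is exactly the failure mode the following slides discuss.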
  • 145. Different regions in the State Space • Local maximum: a state that is better than its neighbouring states, although there exists a state that is better still (the global maximum). It is better because the value of the objective function is higher than at its neighbours. • Global maximum: the best possible state in the state space diagram; here the objective function has its highest value. • Plateau / flat local maximum: a flat region of the state space where neighbouring states have the same value. • Ridge: a region that is higher than its neighbours but itself has a slope; a special kind of local maximum. • Current state: the region of the state space diagram where we are currently present during the search. • Shoulder: a plateau that has an uphill edge.
  • 148. Local maximum: a state that is better than all of its neighbours, but not better than the global maximum.
  • 150. Plateau: a flat area of the search space in which all neighbouring states have the same value. To get off a plateau, make a big jump to try to reach a new section.
  • 151. Ridges (result in a sequence of local maxima): the orientation of the high region, compared to the set of available moves, makes it impossible to climb up.
  • 154. Steepest-Ascent Hill Climbing (Gradient Search) • Standard hill-climbing search algorithm – a simple loop that searches for and selects any operation that improves the current state. • Steepest-ascent hill climbing, or gradient search – a loop that continuously moves in the direction of increasing value (terminates when a peak is reached). – The best move (not just any one) that improves the current state is selected: ■ considers all the moves from the current state ■ selects the best one as the next state.
  • 155. Steepest-Ascent Hill Climbing (Gradient Search)
  • 156. Unlike simple hill climbing, steepest-ascent hill climbing considers all the successor nodes, compares them, and chooses the node that is closest to the solution. In this respect it is similar to best-first search, because it evaluates every successor instead of just one.
  • 158. Stochastic hill climbing does not examine all the neighbouring nodes. It selects one node at random and decides whether to move to it or to search for a better one.
  • 159. Random-restart hill climbing The random-restart algorithm is based on a try-and-try strategy: it iteratively searches, selecting the best node at each step until the goal is found, restarting from a new random initial state whenever it gets stuck. Success depends mostly on the shape of the hill: if there are few plateaus, local maxima, and ridges, it becomes easy to reach the destination.
  • 160. Blocks World In this problem, an initial arrangement of eight blocks is provided. We have to reach the GOAL arrangement by moving blocks in a systematic order. States are evaluated using a heuristic, so that we can get the next best node by applying the steepest-ascent hill climbing technique. Two heuristics are considered: (i) LOCAL and (ii) GLOBAL. Both functions try to maximize the score/cost of each state.
  • 161. LOCAL The cost/score of the goal state is 8 (using the local heuristic), because all the blocks are at their correct positions.
  • 163. Now J is the new current state, with score 6 > the score of I (4). So, in step 2, three moves are possible from the best state J.
  • 164. All the neighbours of node J (states K, L, and M) have a lower score than the value of J, so J is a local maximum, and no further improving move is possible from K, L, and M. The search falls into a TRAP situation. To overcome this problem with the LOCAL function, we can apply the GLOBAL heuristic.
  • 165. As the value of a structure increases, we get nearer to the goal state. (Figure: state I and its successors J, K, L, M scored with the global heuristic.)
  • 166. GLOBAL APPROACH Now the goal state will have a score/cost of +28 and the initial state a cost of −28. Again, the best node for the next move is the one with the maximum score/cost. Further, from state M we can make the following moves: (i) PUSH block G on block A (ii) PUSH block G on block H (iii) PUSH block H on block A (iv) PUSH block H on block G (v) PUSH block A on block H (vi) PUSH block A on block G (vii) PUSH block G on the TABLE … and so on; we select the best node until we get the structure with a score of +28.
  • 167. Simulated Annealing Analogies 167 (figure: metal annealing and a toy analogy)
  • 168. Simulated Annealing • A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. • This lowers the chances of getting caught at a local maximum, a plateau, or a ridge. • It is inspired by the physical process of controlled cooling (crystallization, metal annealing): ■ A metal is heated up to a high temperature and then is progressively cooled in a controlled way until some solid state is reached. ■ If the cooling is adequate, the minimum-energy structure (a global minimum) is obtained. ■ Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained.
  • 169. Simulated Annealing ■ It is a stochastic hill-climbing algorithm (stochastic local search, SLS): ■ A successor is selected among all possible successors according to a probability distribution. ■ The successor can be worse than the current state. ■ A Physical Analogy: Imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy surface. If we just let the ball roll, it will come to rest at a local minimum. If we shake the surface, we can bounce the ball out of the local minimum. The trick is to shake just hard enough to bounce the ball out of local minima but not hard enough to dislodge it from the global minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).
  • 170. Simulated annealing • Main idea: occasionally taking steps in random directions does not decrease (but actually increases) the chance of finding a global optimum. • Disadvantage: the structure of the algorithm increases the execution time. • Advantage: the random steps may allow the search to escape small “hills”. • Temperature: it determines (through a probability function) the amplitude of the steps: long at the beginning, then shorter and shorter. • Annealing: when the amplitude of the random step becomes too small to descend the hill under consideration, the result of the algorithm is said to be annealed.
  • 171. • If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1. • The probability decreases exponentially with the “badness” of the move—the amount ΔE by which the evaluation is worsened. • If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1. Simulated annealing Chapter 2 Problem Solving 171 https://www.youtube.com/watch?v=NI3WllrvWoc
  • 172. Simulated annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  local variables: current, a node; next, a node;
                   T, a “temperature” controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] − VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
Terminology from the physical problem is often used. Downhill moves are accepted readily early in the annealing schedule and then less often as time goes on. The schedule input determines the value of the temperature T as a function of time.
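The pseudocode translates almost line for line into Python. This is a sketch, not a definitive implementation: the toy objective, successor function, and linear schedule below are illustrative choices, not from the slides.

```python
import math
import random

def simulated_annealing(value, random_successor, start, schedule, seed=0):
    """Maximization, following the slide's pseudocode: uphill moves are
    always accepted; a downhill move of size dE (< 0) is accepted with
    probability e^(dE / T), where T is the current temperature."""
    rng = random.Random(seed)
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current          # schedule exhausted: stop
        nxt = random_successor(current, rng)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Toy run: maximize value(x) = -|x - 70| over 0..99, with a linear schedule
# that reaches T = 0 at t = 1000.
value = lambda x: -abs(x - 70)
step = lambda x, rng: min(99, max(0, x + rng.choice((-1, 1))))
result = simulated_annealing(value, step, 5, lambda t: 10.0 - 0.01 * t)
```

Note that with a zero-temperature schedule the algorithm returns the start state immediately, exactly as the `if T = 0 then return current` line dictates.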
  • 173. Probability calculation • The probability also decreases as the “temperature” T goes down: “bad” moves are more likely to be allowed at the start, when T is high, and they become more unlikely as T decreases. Chapter 2 Problem Solving 173
  • 174. Simulated annealing • Aim: to avoid local optima, which represent a problem in hill climbing. • Solution: to take, occasionally, steps in a different direction from the one in which the increase (or decrease) of energy is maximum.
  • 175. Simulated annealing: conclusions ■ It is suitable for problems in which the global optimum is surrounded by many local optima. ■ It is suitable for problems in which it is difficult to find a good heuristic function. ■ Determining the values of the parameters can be a problem and requires experimentation.
  • 180. Local beam search In stochastic beam search, instead of choosing the best k individuals, we select k individuals at random; individuals with a better evaluation are more likely to be chosen. This is done by making the probability of being chosen a function of the evaluation function. Stochastic beam search tends to allow more diversity in the k individuals than plain beam search does.
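One step of this selection rule can be sketched as follows; the toy integer states and value function are illustrative assumptions, not from the slides.

```python
import random

def stochastic_beam_step(beam, successors, value, k, rng):
    """One step of stochastic beam search: pool every successor of the
    current k states, then sample k survivors with probability proportional
    to their evaluation (higher value means more likely, but not guaranteed,
    to survive)."""
    pool = [s for state in beam for s in successors(state)]
    weights = [value(s) for s in pool]   # evaluations must be positive here
    return rng.choices(pool, weights=weights, k=k)

# Toy illustration: integer states, successors x -> x+1 and x+2, value = x.
rng = random.Random(0)
beam = stochastic_beam_step([1, 2], lambda x: (x + 1, x + 2), lambda s: s, 3, rng)
```

Because the draw is weighted rather than a hard top-k cut, lower-valued successors still survive occasionally, which is exactly the extra diversity the slide describes.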
  • 181. Stochastic Beam Search: Genetic Algorithms (GA) Chapter 2 Problem Solving 181
  • 182. Genetic algorithms ■ A genetic algorithm (GA) is a variant of stochastic beam search, in which two parent states are combined. ■ Inspired by the process of natural selection: ■ Living beings adapt to the environment thanks to the characteristics inherited from their parents. ■ The chances of survival and reproduction are proportional to the goodness of these characteristics. ■ The combination of “good” individuals can produce better-adapted individuals.
  • 183. Genetic algorithms ■ To solve a problem via GAs requires: ■ The size of the initial population: GAs start with a set of k randomly generated states ■ The representation of the states (individuals) ■ A function, which measures the fitness of the states ■ Operators, which combine states to obtain new states: the cross-over and mutation operators ■ A strategy to combine individuals 183
  • 184. Genetic algorithms: algorithm ■ Steps of the basic GA algorithm: 1. N individuals from the current population are selected to form the intermediate population (according to some predefined criteria). 2. Individuals are paired and, for each pair: a) the crossover operator is applied and two new individuals are obtained; b) the new individuals are mutated. ■ The resulting individuals form the new population. ■ The process is iterated until the population converges or a specific number of iterations has passed.
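The basic loop above can be sketched generically. The OneMax toy problem (maximize the number of 1s in a bit string) and all helper names are illustrative assumptions, used only to make the loop runnable.

```python
import random

def genetic_algorithm(population, fitness, crossover, mutate, generations, rng):
    """Basic GA loop: pairs are selected with probability proportional to
    fitness, crossed over, and the offspring mutated; the offspring replace
    the population each generation."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        offspring = []
        while len(offspring) < len(population):
            p1, p2 = rng.choices(population, weights=weights, k=2)
            c1, c2 = crossover(p1, p2, rng)
            offspring += [mutate(c1, rng), mutate(c2, rng)]
        population = offspring[:len(population)]
    return max(population, key=fitness)

# Toy problem (OneMax): maximize the number of 1s in a bit string.
def fitness(bits):
    return sum(bits) + 1                  # +1 keeps selection weights positive

def crossover(a, b, rng):                 # single-point crossover
    p = rng.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(bits, rng):                    # flip one random bit
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

rng = random.Random(0)
start = [[rng.randint(0, 1) for _ in range(8)] for _ in range(10)]
best = genetic_algorithm(start, fitness, crossover, mutate, 30, rng)
```

The loop terminates after a fixed number of generations here; checking for population convergence instead, as the slide mentions, is an equally valid stopping rule.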
  • 188. 8-Queens Problem Chapter 2 Problem Solving 188
  • 189. Solving 8-queens problem using Genetic algorithms ■ An 8-queens state must specify the positions of 8 queens, one per column, each position in the range from 1 to 8. ■ Each state is rated by the evaluation function, or fitness function. ■ A fitness function should return higher values for better states; so, for the 8-queens problem, the number of non-attacking pairs of queens is used (8 × 7/2 = 28 for a solution).
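This fitness function is easy to write down directly. A pair of queens attacks if they share a row or a diagonal (columns are distinct by construction); the function below counts attacking pairs and subtracts from 28.

```python
from itertools import combinations

def fitness(state):
    """Number of non-attacking pairs of queens; state[i] is the row (1..8)
    of the queen in column i. A solution scores 8 * 7 / 2 = 28."""
    attacking = sum(
        1
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2)  # same row or diagonal
    )
    return 28 - attacking

# A known solution scores 28; eight queens in one row all attack each other.
assert fitness([2, 4, 6, 8, 3, 1, 7, 5]) == 28
assert fitness([1, 1, 1, 1, 1, 1, 1, 1]) == 0
```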
  • 190. Solving 8-queens problem using Genetic algorithms Chapter 2 Problem Solving 190
  • 191. Representing individuals Chapter 2 Problem Solving 191
  • 192. Generating an initial population Chapter 2 Problem Solving 192
  • 193. Fitness calculation Chapter 2 Problem Solving 193
  • 194. Apply a Fitness function ■ 24/(24+23+20+11) = 31% ■ 23/(24+23+20+11) = 29% etc Chapter 2 Problem Solving 194
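The percentages on the slide come from normalizing the four fitness scores by their total:

```python
scores = [24, 23, 20, 11]          # fitness of the four individuals
total = sum(scores)                # 24 + 23 + 20 + 11 = 78
percentages = [round(100 * s / total) for s in scores]
# 24/78 -> 31%, 23/78 -> 29%, 20/78 -> 26%, 11/78 -> 14%
```

These percentages are then used as the selection probabilities for mating.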
  • 200. Stochastic Universal Sampling Chapter 2 Problem Solving 200
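Stochastic universal sampling (SUS) can be sketched as follows. Instead of n independent roulette-wheel spins, it makes a single spin and reads off n equally spaced pointers, so the expected number of copies of each individual still matches its fitness proportion but with much lower variance. The population and fitness values reuse the earlier slide's numbers for illustration.

```python
import random

def stochastic_universal_sampling(population, fitnesses, n, rng):
    """Select n individuals with one spin and n equally spaced pointers
    over the cumulative-fitness wheel."""
    total = sum(fitnesses)
    step = total / n
    start = rng.uniform(0, step)                 # the single random spin
    pointers = [start + i * step for i in range(n)]
    selected, cumulative, i = [], 0.0, 0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        while i < n and pointers[i] < cumulative:
            selected.append(individual)
            i += 1
    return selected

rng = random.Random(0)
chosen = stochastic_universal_sampling(["a", "b", "c", "d"],
                                       [24, 23, 20, 11], 4, rng)
```

An individual holding all the fitness is guaranteed all the slots, since every pointer falls inside its segment.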
  • 201. Genetic algorithms ■ Fitness function: number of non-attacking pairs of queens (min = 0, max = (8 × 7)/2 = 28) ■ 4 states for the 8-queens problem ■ 2 pairs of 2 states randomly selected based on fitness ■ Random crossover points selected ■ New states after crossover ■ Random mutation applied
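The crossover and mutation steps from the figure can be sketched for 8-queens states. The example parent states and the 0.1 mutation probability are illustrative assumptions, not values fixed by the slides.

```python
import random

def crossover(p1, p2, rng):
    """Single-point crossover: cut both parents at a random column and swap
    the tails, producing two offspring."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(state, rng, probability=0.1):
    """Random mutation: with small probability, move the queen in one
    random column to a random row."""
    state = list(state)
    if rng.random() < probability:
        column = rng.randrange(len(state))
        state[column] = rng.randrange(1, 9)
    return state

rng = random.Random(0)
parent1 = [3, 2, 7, 5, 2, 4, 1, 1]     # example parents
parent2 = [2, 4, 7, 4, 8, 5, 5, 2]
child1, child2 = crossover(parent1, parent2, rng)
```

Single-point crossover conserves the parents' genes as a multiset across the two children, while mutation is what introduces row values absent from both parents.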
  • 202. Genetic algorithms: 8-queens problem ■ The initial population in (a) ■ is ranked by the fitness function in (b), ■ resulting in pairs for mating in (c). ■ They produce offspring in (d), ■ which are subject to mutation in (e).
  • 203. Summary: Genetic Algorithm Chapter 2 Problem Solving 203
  • 204. Genetic algorithms Crossover has the effect of “jumping” to a completely different new part of the search space (quite non-local)
  • 205. Genetic algorithms: application ■ In practice, GAs have had a widespread impact on optimization problems, such as: ■ circuit layout ■ scheduling 205