Problem solving by
searching
Chapter Three
Introduction
• In Chapter 2 we introduced the idea of an agent-centred approach
to AI. This approach involves first specifying the environment in
which a rational agent must operate, thereby clearly defining the
type of “intelligent” behavior that is required of the agent. We
have seen that environments come in many different types, based
on the behavior of the environment, and the agent's perception of
and interaction with the environment.
• In this chapter we look at how we can take an environment and
formulate a problem for the rational agent to solve. We will see
that the different environment types mentioned in Chapter 2 lead
to different types of problem. To begin with, we will concentrate
on formulating and solving the simplest type of problem, known as
a single-state problem.
Introduction
• The basic algorithm for problem-solving agents consists of four
phases: formulate the goal, formulate the problem, search for a
solution, and execute the solution. In solving problems, it is
important to understand the concept of a state space. The
state space of a problem is the set of all possible states that
the environment/agent can be in. A limited set (possibly one)
of these will correspond to the goal of the agent. The aim of
the problem-solving agent is therefore to perform a sequence
of actions that change the environment so that it ends up in
one of the goal states. The search phase of the problem-
solving agent consists of searching the state space for this
sequence of actions.
Introduction
• In this part we show how an agent can act by
establishing goals and considering sequences of
actions that might achieve those goals.
• A goal and a set of means for achieving the goal is
called a problem, and the process of exploring what
the means can do is called search.
• A problem is a gap between what actually is and what is desired. A
problem exists when an individual becomes aware of an obstacle that
makes it difficult to achieve a desired goal or objective.
• Problems addressed in AI fall into two broad classes:
• Toy problems: are problems that are useful to test and demonstrate
methodologies.
• Can be used by researchers to compare the performance of different
algorithms
• e.g. 8-puzzle, n-queens, vacuum cleaner world, …
• Real-life problems: are problems that have much greater
commercial/economic impact if solved.
• Such problems are more difficult and complex to solve, and there is
no single agreed-upon description
• E.g. route finding, traveling salesperson, etc.
Solving a problem
Formalize the problem: Identify the collection of information
that the agent will use to decide what to do.
Define states
• States describe distinguishable stages during the problem-
solving process
• Example: what are the various states in the route-finding
problem?
• The various places including the location of the agent
Define the available operators/rules for getting from one state
to the next
• Operators cause an action that brings transitions from one
state to another by applying on a current state
• Suggest a suitable representation for the problem
space/state space
• Graph, table, list, set, … or a combination of them
• The state space is the set of all states reachable from the initial
state by any sequence of actions, i.e. through iterative application
of the operators
• The state space (also called search space/problem space) of the
problem is defined by the following components:
 Initial state: what state is the environment/agent in to begin
with?
 Actions: the successor function specifies what actions are
possible in each state, and what their result would be. It consists
of a set of action-state pairs.
 Goal test: either an implicit or explicit statement of when the
agent's goal has been achieved.
 Path cost: a step cost c(x,a,y) for each action 'a' that takes the
agent from state 'x' to state 'y'. The sum of all step costs for a
sequence of actions is the path cost.
Example
Find the state space for route finding problem where the
agent wants to go from sidist_kilo to stadium.
 Think of the states reachable from the initial state until we
reach the goal state.
Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations
• Each location may or may not contain dirt
• The agent may be in one location or the other
• Eight possible world states
• Three possible actions (Left, Right, Suck)
• Goal: to clean up all the dirt
• Path cost: each step costs 1, so the path cost is the number of
steps in the path
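This formulation is small enough to write out directly. The sketch below is our own illustration (the state encoding and function names are assumptions, not from the slides): a state is the agent's location plus a dirt flag for each square, giving 2 × 2 × 2 = 8 states.

```python
from itertools import product

# State: (agent_location, dirt_in_A, dirt_in_B) -> 2 x 2 x 2 = 8 states
STATES = list(product(("A", "B"), (True, False), (True, False)))

def result(state, action):
    """Apply one of the three actions (Left, Right, Suck) to a state."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    # Suck removes the dirt (if any) at the agent's current location
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

def goal_test(state):
    return not state[1] and not state[2]   # goal: no dirt anywhere

def path_cost(actions):
    return len(actions)                    # each step costs 1
```

For example, starting at ("A", True, True), the sequence [Suck, Right, Suck] reaches a goal state with path cost 3.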
Example: consider the map of Romania
For example, consider the map of Romania in the figure. Let's say that an
agent is in the town of Arad and has the goal of getting to Bucharest.
What sequence of actions will lead to the agent achieving its goal?
Cont….
If we assume that the environment is fully-observable and
deterministic, then we can formulate this problem as a single-state
problem.
• The environment is fully-observable if the agent knows the map
of Romania and his/her current location.
• It is deterministic if the agent is guaranteed to arrive at the city at
the other end of each road it takes. These are both reasonable
assumptions in this case.
• The single-state problem formulation is therefore:
Cont.…
The 8 puzzle problem
Arrange the tiles so that all the tiles are in the correct
positions. You do this by moving tiles or space. You can
move a tile/space up, down, left, or right, so long as the
following conditions are met:
A)there's no other tile blocking you in the direction of the
movement; and
B)you're not trying to move outside of the
boundaries/edges.
Initial state:      Goal state:
1 2 3               1 2 3
8 4 5               8   4
7 6                 7 6 5
The 8 puzzle problem
States:
a state description specifies the location of each of the eight
tiles and the blank in one of the nine squares.
Initial State:
any state in the state space
Successor function:
the blank moves Left, Right, Up, or Down
Goal test:
current state matches the goal configuration
Path cost:
each step costs 1, so the path cost is just the length of the
path
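The successor function above can be sketched concretely. This is our own illustrative encoding (a state as a tuple of nine entries read row by row, with 0 for the blank); the goal layout matches the grid on the earlier slide.

```python
# State: tuple of 9 tiles read row by row, 0 standing for the blank.
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def successors(state):
    """Yield (action, next_state) for every legal move of the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # condition B: stay within the 3x3 board boundaries
        if (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue
        if (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue
        tiles = list(state)
        tiles[blank], tiles[blank + delta] = tiles[blank + delta], tiles[blank]
        yield action, tuple(tiles)

def goal_test(state):
    return state == GOAL
```

With the blank in the bottom-right corner, only Up and Left are legal, exactly as condition B on the slide requires.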
Missionary-and-cannibal problem:
Three missionaries and three cannibals are on one side
of a river that they wish to cross. There is a boat that can
hold one or two people. Find an action sequence that
brings everyone safely to the opposite bank (i.e. crosses
the river). But you must never leave a group of
missionaries outnumbered by cannibals on either
bank (at any point).
1. Identify the set of states and operators
2. Show using suitable representation the state space of the
problem
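One possible answer to exercise 1, as a hedged sketch (the state encoding and names below are our own, not a prescribed solution): represent a state by the numbers of missionaries and cannibals on the left bank, plus the boat's position.

```python
# State: (missionaries_on_left, cannibals_on_left, boat_on_left)
START, GOAL = (3, 3, True), (0, 0, False)

def is_safe(m, c):
    """Missionaries are never outnumbered on either bank."""
    left_ok = (m == 0) or (m >= c)
    right_ok = (3 - m == 0) or (3 - m >= 3 - c)
    return left_ok and right_ok

# Operators: the boat carries one or two people across
LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def successors(state):
    m, c, boat = state
    for dm, dc in LOADS:
        # people leave whichever bank the boat is currently on
        nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc):
            yield (nm, nc, not boat)
```

Searching this state space finds the classic solution of eleven crossings.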
Based on the environment types discussed in Chapter 2, we
can identify a number of common problem types.
These are summarized in Table 1.
Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations
• Each location may or may not contain dirt
• The agent may be in one location or the other
• Eight possible world states
• Three possible actions (Left, Right, Suck)
• The Suck operator cleans the dirt
• The Left and Right operators move the agent from location to
location
• Goal: to clean up all the dirt
Clean House Task
Vacuum Cleaner state Space
Single state problem
Fully observable: The world is accessible to the agent
• It can determine its exact state through its sensors
• The agent's sensors tell it which state it is in.
Deterministic: The agent knows exactly the effect of its actions
• It can then calculate exactly which state it will be in after any sequence
of actions
• The action sequence can be completely planned in advance.
Example - Vacuum cleaner world
• What will happen if the agent is initially at state 5 and formulates the
action sequence [Right, Suck]?
• The agent calculates and knows that it will reach a goal state:
• Right → {6}
• Suck → {8}
Multiple state problems
Partially observable: The agent has limited access to the world state
• It might not have sensors to get full access to the environment states or as an
extreme, it can have no sensors at all (due to lack of percepts)
Deterministic: The agent knows exactly what each of its actions do
• It can then calculate which state it will be in after any sequence of actions
• If the agent has full knowledge of how its actions change the world, but does
not know of the state of the world, it can still solve the task
Example - Vacuum cleaner world
• Agent's initial state is one of the 8 states: {1,2,3,4,5,6,7,8}
• Action sequence: {right, suck, left, suck}
• Because agent knows what its actions do, it can discover and reach to goal
state.
Right  [2.4.6.8.] Suck  {4,8}
Left  {3,7} Suck  {7}
Contingency problems
Partially observable: The agent has limited access to the world state
Non-deterministic: The agent is ignorant of the effect of its actions.
• Sometimes this ignorance prevents the agent from finding a guaranteed
solution sequence.
• Suppose the agent is in Murphy's law world
The agent has to sense during the execution phase, since things
might have changed while it was carrying out an action. This
implies that
• the agent has to compute a tree of actions, rather than a linear
sequence of actions
Example - Vacuum cleaner world:
• The action 'Suck' deposits dirt on the carpet, but only if there is no
dirt already. Depositing dirt rather than sucking it results from the
agent's ignorance about the effects of its actions.
Exploration problems
The agent has no knowledge of the environment
• World partially observable: no knowledge of states (environment)
• Unknown state space (no map, no sensors)
• Non-deterministic: no knowledge of the effects of its actions
• A problem faced by (intelligent) agents, like new-born babies
This is a kind of problem in the real world rather than in a model, which may
involve significant danger for an ignorant agent. If the agent survives, it
learns about the environment.
The agent must experiment, learn and build a model of the environment
through the results of its actions, gradually discovering:
• what sorts of states exist and what its actions do
• Then it can use these to solve subsequent (future) problems
• Example: in solving the vacuum cleaner world problem, the agent learns the
state space and the effects of action sequences such as [Suck, Right]
Well-defined problems and solutions
To define a problem, we need the following elements: states,
operators, a goal test function and a cost function.
 Goal formulation
Is a step that specifies exactly what the agent is trying to achieve.
This step narrows down the scope that the agent has to look at.
 Problem formulation
Is a step that puts down the actions and states that the agent has to
consider given a goal (avoiding any redundant states), like:
• the initial state
• the allowable actions etc…
 Search
Is the process of looking for the various sequences of actions that
lead to a goal state, evaluating them, and choosing the optimal
sequence.
 Execute
Is the final step, in which the agent executes the chosen sequence of
actions to reach the solution/goal
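The four phases above can be sketched as a single agent loop. All names here are illustrative placeholders (our own, not from the chapter):

```python
def simple_problem_solving_agent(initial_state, formulate_goal,
                                 formulate_problem, search, execute):
    goal = formulate_goal(initial_state)              # 1. goal formulation
    problem = formulate_problem(initial_state, goal)  # 2. problem formulation
    solution = search(problem)                        # 3. search for a solution
    if solution is None:
        return None                                   # search failed
    for action in solution:                           # 4. execute the solution
        execute(action)
    return solution
```

Any of the search strategies discussed later in this chapter can be plugged in as the `search` argument.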
AI Group Assignment 30%
Write about an introduction to robots (robotics).
Contents: introduction, types of robots, robot hardware, ….
Due date: 21/01/2015 EC (group assignment)
Tree Searching
Actually we can solve this state space search problem by using a tree
search algorithm. For example, the tree shown in Figure 3 illustrates
the start of the tree search process: at each iteration we select a
node. If the node represents a goal state we stop searching.
Otherwise we “expand” the selected node (i.e. generate its possible
successors using the successor function) and add the successors as
child nodes of the selected node. This process continues until either
we find a goal state, or there are no nodes left to expand, in which
case the search has failed.
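The loop just described can be sketched directly; switching the fringe between FIFO and LIFO already gives two different strategies. This is a toy sketch with our own names, not the book's pseudocode:

```python
from collections import deque

def tree_search(initial, goal_test, successors, fifo=True):
    """Generic tree search: select a node, test it, otherwise expand it."""
    fringe = deque([(initial, [])])        # each node = (state, actions so far)
    while fringe:
        # select a node: FIFO gives breadth-first, LIFO gives depth-first
        state, path = fringe.popleft() if fifo else fringe.pop()
        if goal_test(state):
            return path                    # goal state reached: stop searching
        for action, child in successors(state):
            fringe.append((child, path + [action]))   # add children to fringe
    return None                            # no nodes left: the search has failed
```

Note that pure tree search can revisit states; the examples below add bookkeeping to keep the searches terminating.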
Tree Searching
Search Terminology
Problem Space − It is the environment in which the search takes place. (A set
of states and set of operators to change those states)
Problem Instance − It is Initial state + Goal state.
Problem Space Graph − It represents the problem space. States are shown by
nodes and operators are shown by edges.
Depth of a problem − Length of the shortest path, or shortest sequence of
operators, from the initial state to a goal state.
Admissibility − A property of an algorithm that always finds an optimal solution.
Branching Factor − The average number of child nodes in the problem space
graph.
Tree Searching
• In the following sections we will examine a number of tree
search strategies. For each strategy, we will assess it against a
number of criteria:
• Completeness: does the algorithm always find a solution if there
is one?
• Optimality: does the algorithm always find the least-cost
solution?
• Space Complexity − The maximum number of nodes that are
stored in memory.
• Time Complexity − The maximum number of nodes that are
created.
Uninformed Searching
• The simplest type of tree search algorithm is called uninformed,
or blind, tree search. These algorithms do not use any additional
information about states apart from that provided in the problem
definition .
• These algorithms are generally inefficient.
Breadth-First Search
• One of the simplest uninformed tree search strategies is breadth-
first search.
• Breadth-first search can be implemented by using a FIFO (First-In
First-Out) queue for the list of unexpanded nodes.
• In breadth first search we always select the minimum depth node for
expansion. This has the effect that we “explore” the tree by moving
across the breadth of the tree, completely exploring every level
before moving down to the next level. Figure 4 illustrates the order of
node expansion using breadth-first search on a sample tree with
branching factor 3.
Example of Breadth-First Search
Summary of breadth-first search analysis
 Complete: Yes (assuming b is finite)
 Time Complexity: O(b^d)
 Space complexity: O(b^d)
 Optimal: Yes, if step cost = 1 (i.e. all step costs are the same)
where
b – maximum branching factor of the tree.
d – depth of the shallowest goal node.
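A minimal sketch of breadth-first search with a FIFO queue, tried on a small hand-made fragment of the Romania map (the `ROADS` adjacency list is our own illustration, and a visited set is added to keep the example terminating):

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Expand the shallowest unexpanded node first (FIFO queue)."""
    frontier = deque([[start]])            # queue of paths from the start
    visited = {start}
    while frontier:
        path = frontier.popleft()          # FIFO: oldest (shallowest) first
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:         # skip states already generated
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Illustrative fragment of the Romania road map
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Oradea": ["Sibiu"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": [],
}
```

On this fragment BFS returns the shallowest route, Arad → Sibiu → Fagaras → Bucharest, regardless of road lengths.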
Uniform Cost Search
Uniform cost search is similar to breadth-first search except that
it tries to overcome the limitation of not being optimal when
step costs are not identical. Instead of always expanding the
minimum depth node, uniform cost search always expands the
node with the least path cost. In other words, uniform cost search
always expands the node which is “closest” to the initial state,
and therefore has the greatest potential for leading to a least-
cost solution.
If all step costs are identical, uniform cost search is equivalent to
breadth-first search .
Cont..
Summary of uniform cost search
where
b – maximum branching factor of the tree.
ε – a positive lower bound on the cost of each step.
C* – cost of the optimal solution.
 Complete: Yes (if b is finite and every step cost is at least ε > 0)
 Time Complexity: O(b^(C*/ε))
 Space complexity: O(b^(C*/ε))
 Optimal: Yes (it always expands the lowest-cost node first)
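A hedged sketch using a priority queue keyed on path cost, tried on an illustrative weighted fragment of the Romania map (the distances below follow the worked A* example later in the chapter):

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """Always expand the fringe node with the least path cost g(n)."""
    frontier = [(0, start, [start])]       # (path cost so far, state, path)
    best_cost = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)   # cheapest node first
        if state == goal:
            return cost, path
        for nxt, step in neighbors(state):
            new_cost = cost + step         # add the step cost c(state, a, nxt)
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Illustrative weighted fragment of the Romania map (distances in km)
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75)], "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [],
}
```

Unlike BFS, this finds the cheaper route via Rimnicu Vilcea and Pitesti (cost 418) rather than the shallower route via Fagaras (cost 450).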
Depth-First Search
• Depth-first search can be implemented by using a LIFO (Last-In
First-Out) stack for the list of unexpanded nodes
• Depth-first search is an alternative tree search algorithm that
has linear space complexity.
• With depth-first search we always choose the deepest node for
expansion. For example, Figure 5 illustrates the order of node
expansion for the same simple tree we saw in Figure 4.
Example of Depth-First Search
 Complete: No (fails in infinite-depth trees)
 Time Complexity: O(b^m)
 Space complexity: O(bm) (linear in the depth)
 Optimal: No
Where
b – maximum branching factor of the tree.
m – maximum depth of the search tree.
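A minimal sketch of depth-first search with a LIFO stack, again on our own toy fragment of the Romania map (a cycle check along the current path keeps the example from looping):

```python
def depth_first_search(start, goal, neighbors):
    """Expand the deepest node first using a LIFO stack."""
    stack = [[start]]                      # stack of paths from the start
    while stack:
        path = stack.pop()                 # LIFO: most recently added node
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in path:            # don't revisit states on this path
                stack.append(path + [nxt])
    return None

# Illustrative fragment of the Romania road map
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": [],
}
```

The route it returns depends on the order in which successors are pushed, which is why depth-first search is not optimal.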
Depth-limited search
 Depth-first search has much better space complexity than breadth-first or
uniform cost, but it is not complete for infinite depth trees and it is not optimal.
Depth-limited search attempts to overcome the first of these weaknesses. The
idea behind depth-limited search is to run a depth-first search but place a limit (a
“cut-off”) on the maximum depth to search to. For example, if our cut-off depth
is l, then we will never expand any nodes at level l.
 Depth-limited search does indeed handle infinite depth trees better, but it introduces
some new weaknesses. If the goal state is below the cut-off level l it will not be
found. Also if the goal is above level l depth-limited search cannot be guaranteed
to find the least-cost solution, so it is not optimal.
Summary of depth-limited search
 Complete: Yes, if the shallowest goal depth d ≤ the depth limit l
 Time Complexity: O(b^l)
 Space complexity: O(bl) (linear, like depth-first search)
 Optimal: No (even when l > d, it is not guaranteed to find the
least-cost solution)
Where
b – maximum branching factor of the tree.
l – the depth limit
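A hedged recursive sketch: the cut-off is enforced by decrementing the limit, so the search terminates even on a map with cycles (the `ROADS` fragment below is our own illustration).

```python
def depth_limited_search(state, goal, neighbors, limit):
    """Depth-first search that never expands nodes beyond the cut-off depth."""
    if state == goal:
        return [state]
    if limit == 0:
        return None                        # cut-off: treat as a dead end
    for nxt in neighbors(state):
        result = depth_limited_search(nxt, goal, neighbors, limit - 1)
        if result is not None:
            return [state] + result
    return None

# Illustrative fragment of the Romania road map (cycles included)
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": [],
}
```

With limit 2 the goal lies below the cut-off and the search fails; with limit 3 it succeeds.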
Informed Search
• Informed search algorithms attempt to use extra domain
knowledge to inform the search, in an attempt to reduce
search time.
• A particular class of informed search algorithms is known as
best-first search. Note that best-first search is not an
algorithm itself, but a general approach. In best-first search,
we use a heuristic function to estimate which of the nodes in
the fringe is the “best” node for expansion. This heuristic
function, h(n), estimates the cost of the cheapest path from
node n to the goal state. In other words, it tells us which of the
nodes in the fringe it thinks is “closest” to the goal.
• We will now examine two similar, but not identical, best-first
search algorithms: greedy best-first and A* search.
Greedy Best-First Search
 The simplest best-first search algorithm is greedy best-first search.
 This algorithm simply expands the node that is estimated to be closest to the
goal, i.e. the one with the lowest value of the heuristic function h(n) .
 For example, let us return to the Romania example we introduced in the
previous chapter (the state space is reproduced in Figure 1 for ease of
reference). What information can we use to estimate the actual road distance
from a city to Bucharest? In other words, what domain knowledge can we use
to estimate which of the unexpanded nodes is closest to Bucharest? One
possible answer is to use the straight-line distance from each city to Bucharest.
Table 1 shows a list of all these distances .
Greedy Best-First Search
Greedy Best-First Search
Using this information, the greedy best-first search algorithm will select
for expansion the node from the unexpanded fringe list with the lowest
value of hSLD(n).
Summary of Greedy Best-First Search
Completeness: no, can get stuck in loops
Optimality: no, can go for non-optimal solutions that look good in
the short term
 Time complexity: O(b^m), but a good heuristic can make dramatic
improvement
 Space complexity: same as time complexity
Where
b – maximum branching factor of the tree.
m – maximum depth of the search tree.
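A sketch of greedy best-first search on our own toy fragment of the Romania map; the straight-line-distance values in `H_SLD` are taken from the worked example's f-computations and should be treated as illustrative.

```python
import heapq

# Assumed straight-line distances to Bucharest, hSLD(n)
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}

# Illustrative fragment of the Romania road map
ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad"], "Timisoara": ["Arad"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Oradea": ["Sibiu"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": [],
}

def greedy_best_first(start, goal, neighbors, h):
    """Expand the fringe node with the lowest heuristic estimate h(n)."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)   # lowest h(n) first
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```

On this fragment greedy search heads through Fagaras because Fagaras looks closest to Bucharest, even though that route is not the cheapest.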
A* Search algorithm
A* search is similar to greedy best-first search, except that
it also takes into account the actual path cost taken so far to
reach each node.
The node with the lowest estimated total path cost, f(n), is
expanded, f(n) = g(n) + h(n)
Where
g(n) = total actual path cost to get to node n
h(n) = estimated path cost to get from node n to goal .
Cont.….
Red numbers indicate the heuristic value for each node and blue numbers indicate the
path cost from one node to the next. Our initial state is node A and the goal state is node G.
Conti…
Execution of A* search for the Romania map is given below. Step 1:
 Fringe=[Arad]
 Lowest value of evaluation function f(Arad)=0+366=366
 Action: expand Arad
Step 2:
 Fringe=[Sibiu,Timisoara,Zerind]
 Lowest value of evaluation function f(Sibiu)=140+253=393
 Action: expand Sibiu
Step 3:
 Fringe=[Timisoara, Zerind, Arad, Fagaras, Oradea, Rimnicu Vilcea]
 Lowest value of evaluation function f(Rimnicu Vilcea)=220+193=413
 Action: expand Rimnicu Vilcea
Cont.…
Step 4:
 Fringe=[Timisoara, Zerind, Arad, Fagaras, Oradea, Craiova, Pitesti,
Sibiu]
 Lowest value of evaluation function f(Fagaras)=239+176=415
 Action: expand Fagaras
Step 5:
 Fringe=[Timisoara, Zerind, Arad, Oradea, Craiova, Pitesti, Sibiu,
Sibiu, Bucharest]
 Lowest value of evaluation function f(Pitesti)=317+100=417
 Action: expand Pitesti
Cont.….
Step 6:
 Fringe=[Timisoara, Zerind, Arad, Oradea, Craiova, Sibiu, Sibiu, Bucharest,
Bucharest, Craiova, Rimnicu Vilcea]
 Lowest value of evaluation function f(Bucharest)=418+0=418
 Action: goal found at Bucharest.
Notice that A* search finds a different (and optimal) solution to greedy
best-first search, getting to Bucharest via Sibiu, Rimnicu Vilcea and Pitesti,
rather than via Sibiu and Fagaras.
Summary of A* Search
 Completeness: YES
 Optimality: YES (given an admissible heuristic)
 Time complexity: O(b^m), but a good heuristic can make
dramatic improvement
 Space complexity: same as time complexity
Where
b – maximum branching factor of the tree.
m – depth of the least-cost solution.
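The f(n) = g(n) + h(n) evaluation can be sketched as follows; the weighted `ROADS` fragment and `H_SLD` table are our own illustration, chosen to reproduce the step costs in the worked example above (140+80+97+101 = 418).

```python
import heapq

# Assumed straight-line distances to Bucharest, h(n)
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}

# Illustrative weighted fragment of the Romania map (distances in km)
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75)], "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Oradea", 151), ("Fagaras", 99),
              ("Rimnicu Vilcea", 80)],
    "Oradea": [("Sibiu", 151)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [],
}

def a_star(start, goal, neighbors, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step in neighbors(state):
            new_g = g + step                       # g(n): actual cost so far
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None
```

On this fragment A* expands nodes in the same order as the worked example and returns the optimal route via Rimnicu Vilcea and Pitesti.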
Quiz 10%
1. Which searching algorithm do you think is better, and
why?
2. What is the difference between blind and heuristic searching
algorithms?
3. List and explain the criteria that can be used to assess the
performance of searching algorithms.

More Related Content

Similar to chapterThree.pptx

Problem Solving Techniques
Problem Solving TechniquesProblem Solving Techniques
Problem Solving Techniques
Sagacious IT Solution
 
Week 4.pdf
Week 4.pdfWeek 4.pdf
Week 4.pdf
ZamshedForman1
 
Problem solving agents
Problem solving agentsProblem solving agents
Problem solving agents
Megha Sharma
 
Week 3.pdf
Week 3.pdfWeek 3.pdf
Week 3.pdf
ZamshedForman1
 
Search-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdfSearch-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdf
MrRRThirrunavukkaras
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptx
Yousef Aburawi
 
artificial intelligence document final.pptx
artificial intelligence document final.pptxartificial intelligence document final.pptx
artificial intelligence document final.pptx
thahaxaina025
 
2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt
Dr. Naushad Varish
 
problemsolving with AI.pptx
problemsolving with AI.pptxproblemsolving with AI.pptx
problemsolving with AI.pptx
PriyadharshiniG41
 
Problem space
Problem spaceProblem space
Problem space
harman_sekhon
 
Problem space
Problem spaceProblem space
Problem space
harman_sekhon
 
Problem space
Problem spaceProblem space
Problem space
harman_sekhon
 
AI-03 Problems State Space.pptx
AI-03 Problems State Space.pptxAI-03 Problems State Space.pptx
AI-03 Problems State Space.pptx
Pankaj Debbarma
 
Amit ppt
Amit pptAmit ppt
Amit ppt
amitp26
 
Lesson 22
Lesson 22Lesson 22
Lesson 22
Avijit Kumar
 
AI Lesson 22
AI Lesson 22AI Lesson 22
AI Lesson 22
Assistant Professor
 
Lecture 07 search techniques
Lecture 07 search techniquesLecture 07 search techniques
Lecture 07 search techniques
Hema Kashyap
 
Week 2.pdf
Week 2.pdfWeek 2.pdf
Week 2.pdf
ZamshedForman1
 
Chapter2final 130103081315-phpapp02
Chapter2final 130103081315-phpapp02Chapter2final 130103081315-phpapp02
Chapter2final 130103081315-phpapp02
Madhan Kumar
 
Planning
Planning Planning
Planning
Amar Jukuntla
 

Similar to chapterThree.pptx (20)

Problem Solving Techniques
Problem Solving TechniquesProblem Solving Techniques
Problem Solving Techniques
 
Week 4.pdf
Week 4.pdfWeek 4.pdf
Week 4.pdf
 
Problem solving agents
Problem solving agentsProblem solving agents
Problem solving agents
 
Week 3.pdf
Week 3.pdfWeek 3.pdf
Week 3.pdf
 
Search-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdfSearch-Beyond-Classical-no-exercise-answers.pdf
Search-Beyond-Classical-no-exercise-answers.pdf
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptx
 
artificial intelligence document final.pptx
artificial intelligence document final.pptxartificial intelligence document final.pptx
artificial intelligence document final.pptx
 
2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt2.Problems Problem Spaces and Search.ppt
2.Problems Problem Spaces and Search.ppt
 
problemsolving with AI.pptx
problemsolving with AI.pptxproblemsolving with AI.pptx
problemsolving with AI.pptx
 
Problem space
Problem spaceProblem space
Problem space
 
Problem space
Problem spaceProblem space
Problem space
 
Problem space
Problem spaceProblem space
Problem space
 
AI-03 Problems State Space.pptx
AI-03 Problems State Space.pptxAI-03 Problems State Space.pptx
AI-03 Problems State Space.pptx
 
Amit ppt
Amit pptAmit ppt
Amit ppt
 
Lesson 22
Lesson 22Lesson 22
Lesson 22
 
AI Lesson 22
AI Lesson 22AI Lesson 22
AI Lesson 22
 
Lecture 07 search techniques
Lecture 07 search techniquesLecture 07 search techniques
Lecture 07 search techniques
 
Week 2.pdf
Week 2.pdfWeek 2.pdf
Week 2.pdf
 
Chapter2final 130103081315-phpapp02
Chapter2final 130103081315-phpapp02Chapter2final 130103081315-phpapp02
Chapter2final 130103081315-phpapp02
 
Planning
Planning Planning
Planning
 

Recently uploaded

REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdfREASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
giancarloi8888
 
BPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end examBPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end exam
sonukumargpnirsadhan
 
Bonku-Babus-Friend by Sathyajith Ray (9)
Bonku-Babus-Friend by Sathyajith Ray  (9)Bonku-Babus-Friend by Sathyajith Ray  (9)
Bonku-Babus-Friend by Sathyajith Ray (9)
nitinpv4ai
 
Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.
IsmaelVazquez38
 
Standardized tool for Intelligence test.
Standardized tool for Intelligence test.Standardized tool for Intelligence test.
Standardized tool for Intelligence test.
deepaannamalai16
 
HYPERTENSION - SLIDE SHARE PRESENTATION.
HYPERTENSION - SLIDE SHARE PRESENTATION.HYPERTENSION - SLIDE SHARE PRESENTATION.
HYPERTENSION - SLIDE SHARE PRESENTATION.
deepaannamalai16
 
The basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptxThe basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptx
heathfieldcps1
 
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...
EduSkills OECD
 
Pharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brubPharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brub
danielkiash986
 
CIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdfCIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdf
blueshagoo1
 
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptxRESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
zuzanka
 
Data Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsxData Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsx
Prof. Dr. K. Adisesha
 
Temple of Asclepius in Thrace. Excavation results
Temple of Asclepius in Thrace. Excavation resultsTemple of Asclepius in Thrace. Excavation results
Temple of Asclepius in Thrace. Excavation results
Krassimira Luka
 
How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17
Celine George
 
Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)
nitinpv4ai
 
KHUSWANT SINGH.pptx ALL YOU NEED TO KNOW ABOUT KHUSHWANT SINGH
KHUSWANT SINGH.pptx ALL YOU NEED TO KNOW ABOUT KHUSHWANT SINGHKHUSWANT SINGH.pptx ALL YOU NEED TO KNOW ABOUT KHUSHWANT SINGH
KHUSWANT SINGH.pptx ALL YOU NEED TO KNOW ABOUT KHUSHWANT SINGH
shreyassri1208
 
spot a liar (Haiqa 146).pptx Technical writhing and presentation skills
spot a liar (Haiqa 146).pptx Technical writhing and presentation skillsspot a liar (Haiqa 146).pptx Technical writhing and presentation skills
spot a liar (Haiqa 146).pptx Technical writhing and presentation skills
haiqairshad
 
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
ImMuslim
 
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptxBIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
RidwanHassanYusuf
 
Observational Learning
Observational Learning Observational Learning
Observational Learning
sanamushtaq922
 

Recently uploaded (20)

REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdfREASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
REASIGNACION 2024 UGEL CHUPACA 2024 UGEL CHUPACA.pdf
 
BPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end examBPSC-105 important questions for june term end exam
chapterThree.pptx
• A problem is a gap between what actually is and what is desired.
• A problem exists when an individual becomes aware of an obstacle that makes it difficult to achieve a desired goal or objective.
• A goal and a set of means for achieving the goal is called a problem, and the process of exploring what the means can do is called search.
A number of problems are addressed in AI:
• Toy problems: problems that are useful to test and demonstrate methodologies.
 • Can be used by researchers to compare the performance of different algorithms
 • e.g. 8-puzzle, n-queens, vacuum cleaner world, …
• Real-life problems: problems that have much greater commercial/economic impact if solved.
 • Such problems are more difficult and complex to solve, and there is no single agreed-upon description
 • e.g. route finding, traveling salesperson, etc.
Solving a problem
Formalize the problem: identify the collection of information that the agent will use to decide what to do.
Define states
• States describe distinguishable stages during the problem-solving process
• Example: what are the various states in the route finding problem?
 • The various places, including the location of the agent
Define the available operators/rules for getting from one state to the next
• Operators cause an action that brings a transition from one state to another when applied to the current state
Suggest a suitable representation for the problem space/state space
• Graph, table, list, set, … or a combination of them
• The state space is the set of all relevant states reachable from the initial state by any sequence of actions, i.e. through iterative application of the operators.
• The state space (also called the search space or problem space) of the problem is defined by:
 Initial state: what state is the environment/agent in to begin with?
 Actions: the successor function specifies what actions are possible in each state, and what their result would be. It consists of a set of action–state pairs.
 Goal test: either an implicit or explicit statement of when the agent's goal has been achieved.
 Path cost: a step cost c(x, a, y) for each action a that takes the agent from state x to state y; the sum of all step costs for a sequence of actions is the path cost.
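The four elements above can be captured in a small problem class. A minimal sketch; the three-node graph at the bottom is an invented example, not from the slides.

```python
# A problem bundles: initial state, successor function (actions),
# goal test, and step costs. States here are plain strings.
class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial   # initial state
        self.goal = goal         # used by the goal test
        self.graph = graph       # state -> {neighbor: step cost}

    def successors(self, state):
        """Successor function: a set of (action, result) pairs."""
        return [(f"go({nbr})", nbr) for nbr in self.graph[state]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, x, y):
        return self.graph[x][y]

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
p = Problem("A", "C", graph)
print(p.successors("A"))  # [('go(B)', 'B'), ('go(C)', 'C')]
print(p.goal_test("C"))   # True
```

A path cost is then just the sum of `step_cost` over consecutive states in the path.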
Example
Find the state space for the route finding problem where the agent wants to go from sidist_kilo to stadium.
 Think of the states reachable from the initial state until we reach the goal state.
Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations
 • Each location may or may not contain dirt
 • The agent may be in one location or the other
• Eight possible world states
• Three possible actions (Left, Right, Suck)
• Goal: to clean up all the dirt
• Path cost: each step costs 1, so the path cost is the number of steps in the path
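The eight world states come from 2 agent locations × 2 dirt possibilities per location. A minimal sketch that enumerates them; the (agent, dirt_left, dirt_right) encoding is an illustrative choice, not fixed by the slides.

```python
from itertools import product

# 2 agent locations x dirt/no-dirt in each of 2 locations = 8 states.
states = list(product(["Left", "Right"], [True, False], [True, False]))
print(len(states))  # 8

def goal_test(state):
    """Goal: all the dirt is cleaned up, wherever the agent is."""
    _, dirt_left, dirt_right = state
    return not dirt_left and not dirt_right

# Two of the eight states satisfy the goal (agent Left or agent Right).
print(sum(goal_test(s) for s in states))  # 2
```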
Example: consider the map of Romania
For example, consider the map of Romania in the figure. Say an agent is in the town of Arad and has the goal of getting to Bucharest. What sequence of actions will lead to the agent achieving its goal?
Cont….
If we assume that the environment is fully observable and deterministic, then we can formulate this problem as a single-state problem.
• The environment is fully observable if the agent knows the map of Romania and its current location.
• It is deterministic if the agent is guaranteed to arrive at the city at the other end of each road it takes.
These are both reasonable assumptions in this case.
• The single-state problem formulation is therefore:
The 8-puzzle problem
Arrange the tiles so that all the tiles are in the correct positions. You do this by sliding tiles into the blank space. You can move a tile up, down, left, or right, so long as the following conditions are met:
A) there is no other tile blocking you in the direction of the movement; and
B) you are not trying to move outside of the boundaries/edges.
(Two 3×3 grids shown: a start configuration and the goal configuration 1 2 3 / 8 _ 4 / 7 6 5.)
The 8-puzzle problem
States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
Initial state: any state in the state space
Successor function: the blank moves Left, Right, Up, or Down
Goal test: current state matches the goal configuration
Path cost: each step costs 1, so the path cost is just the length of the path
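The successor function above can be sketched in a few lines. A state here is a tuple of 9 entries (tiles 1–8 and 0 for the blank) read row by row; this encoding is an illustrative choice, not fixed by the slides.

```python
# Generate (action, result) pairs by moving the blank, staying on the board.
def successors(state):
    moves = []
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    for action, dr, dc in [("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # condition B: stay inside the edges
            j = r * 3 + c
            new = list(state)
            new[i], new[j] = new[j], new[i]  # slide the tile into the blank
            moves.append((action, tuple(new)))
    return moves

goal = (1, 2, 3, 8, 0, 4, 7, 6, 5)  # goal grid with the blank in the centre
print(len(successors(goal)))        # 4: a centre blank can move any way
```

A corner blank has only two legal moves, which is why the branching factor of the 8-puzzle varies between 2 and 4.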
Missionaries-and-cannibals problem
Three missionaries and three cannibals are on one side of a river that they wish to cross. There is a boat that can hold one or two people. Find an action sequence that brings everyone safely to the opposite bank (i.e. crosses the river), but never leaves a group of missionaries outnumbered by cannibals on either bank.
1. Identify the set of states and operators
2. Show, using a suitable representation, the state space of the problem
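One way to begin part 1 is to write down the legality test for a state. A sketch assuming the common (missionaries, cannibals, boat) on-the-start-bank encoding; the encoding itself is an illustrative choice.

```python
# state = (m, c, b): missionaries, cannibals and boat on the start bank.
def is_legal(state):
    m, c, _ = state
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    # Missionaries must not be outnumbered on the start bank...
    if m > 0 and c > m:
        return False
    # ...nor on the far bank (3 - m missionaries, 3 - c cannibals there).
    if (3 - m) > 0 and (3 - c) > (3 - m):
        return False
    return True

print(is_legal((3, 3, 1)))  # True: everyone on the start bank
print(is_legal((1, 2, 0)))  # False: 1 missionary with 2 cannibals
```

The operators are then the boat loads {1M, 2M, 1C, 2C, 1M+1C}, applied in the direction the boat is on, keeping only legal resulting states.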
Based on the environment types discussed in Chapter 2, we can identify a number of common problem types. These are summarized in Table 1.
Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
The world has only two locations
• Each location may or may not contain dirt
• The agent may be in one location or the other
Eight possible world states
• Three possible actions (Left, Right, Suck)
 • Suck cleans the dirt in the agent's location
 • Left and Right move the agent from location to location
• Goal: to clean up all the dirt
Single-state problems
Fully observable: the world is accessible to the agent
• It can determine its exact state through its sensors
• The agent's sensors tell it which state it is in
Deterministic: the agent knows exactly the effect of its actions
• It can then calculate exactly which state it will be in after any sequence of actions
• The action sequence can be completely planned
Example - Vacuum cleaner world
• What will happen if the agent is initially at state 5 and formulates the action sequence [Right, Suck]?
• The agent calculates and knows that it will reach a goal state
 • Right  {6}
 • Suck  {8}
Multiple-state problems
Partially observable: the agent has limited access to the world state
• It might not have sensors to get full access to the environment states, or as an extreme it can have no sensors at all (lack of percepts)
Deterministic: the agent knows exactly what each of its actions does
• It can then calculate which set of states it could be in after any sequence of actions
• If the agent has full knowledge of how its actions change the world, but does not know the state of the world, it can still solve the task
Example - Vacuum cleaner world
• The agent's initial state is one of the 8 states: {1,2,3,4,5,6,7,8}
• Action sequence: [Right, Suck, Left, Suck]
• Because the agent knows what its actions do, it can discover and reach the goal state:
 Right  {2,4,6,8}
 Suck  {4,8}
 Left  {3,7}
 Suck  {7}
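The belief-state update in this example can be reproduced in a few lines. In the sketch below, states are encoded as (agent location, dirt-left, dirt-right); the numbering 1–8 is inferred from the transitions quoted on the slide (Right → {2,4,6,8}, Suck → {4,8}, Left → {3,7}, Suck → {7}), so treat it as an assumption.

```python
# Inferred numbering: odd = agent Left, even = agent Right.
STATES = {
    1: ("L", True, True),   2: ("R", True, True),
    3: ("L", True, False),  4: ("R", True, False),
    5: ("L", False, True),  6: ("R", False, True),
    7: ("L", False, False), 8: ("R", False, False),
}
NUM = {v: k for k, v in STATES.items()}

def result(state, action):
    """Deterministic effect of one action in one concrete state."""
    agent, dl, dr = STATES[state]
    if action == "Right":
        agent = "R"
    elif action == "Left":
        agent = "L"
    elif action == "Suck":
        if agent == "L":
            dl = False
        else:
            dr = False
    return NUM[(agent, dl, dr)]

def update(belief, action):
    """Unknown state, known actions: map the action over the belief set."""
    return {result(s, action) for s in belief}

belief = {1, 2, 3, 4, 5, 6, 7, 8}
for a in ["Right", "Suck", "Left", "Suck"]:
    belief = update(belief, a)
print(belief)  # {7}: the agent is certain it has reached a goal state
```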
Contingency problems
Partially observable: the agent has limited access to the world state
Non-deterministic: the agent is ignorant of the effects of its actions
• Sometimes this ignorance prevents the agent from finding a guaranteed solution sequence
• Suppose the agent is in a Murphy's law world
The agent has to sense during the execution phase, since things might have changed while it was carrying out an action. This implies that:
• the agent has to compute a tree of actions, rather than a linear sequence of actions
Example - Vacuum cleaner world:
• The action 'Suck' deposits dirt on the carpet, but only if there is no dirt there already. Depositing dirt rather than sucking results from ignorance about the effects of actions.
Exploration problems
The agent has no knowledge of the environment
• Partially observable world: no knowledge of states (environment)
 • Unknown state space (no map, no sensors)
• Non-deterministic: no knowledge of the effects of its actions
This is the kind of problem faced by (intelligent) agents such as newborn babies: a problem in the real world rather than in a model, which may involve significant danger for an ignorant agent. If the agent survives, it learns about the environment.
The agent must experiment, learn and build a model of the environment through the results of its actions, gradually discovering:
• what sorts of states exist and what its actions do
• Then it can use these to solve subsequent (future) problems
• Example: in solving the vacuum cleaner world problem, the agent learns the state space and the effects of its action sequences, say [Suck, Right]
Well-defined problems and solutions
To define a problem, we need the following elements: states, operators, a goal test function and a cost function.
 Goal formulation
A step that specifies exactly what the agent is trying to achieve. This step narrows down the scope the agent has to look at.
 Problem formulation
A step that puts down the actions and states the agent has to consider given a goal (avoiding any redundant states), such as:
• the initial state
• the allowable actions, etc.
 Search
The process of looking for the various sequences of actions that lead to a goal state, evaluating them and choosing the optimal sequence.
 Execute
The final step: the agent executes the chosen sequence of actions to get to the solution/goal.
AI Group Assignment (30%)
Write about an introduction to robots (robotics).
Contents: introduction, types of robots, robot hardware, …
Due date: 21/01/2015 EC (group assignment)
Tree Searching
We can solve this state space search problem by using a tree search algorithm. For example, the tree shown in Figure 3 illustrates the start of the tree search process: at each iteration we select a node. If the node represents a goal state we stop searching. Otherwise we "expand" the selected node (i.e. generate its possible successors using the successor function) and add the successors as child nodes of the selected node. This process continues until either we find a goal state, or there are no nodes left to expand, in which case the search has failed.
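The loop just described can be sketched in a few lines. The fringe discipline is what distinguishes the strategies examined in the following sections; the FIFO queue used here is one choice among several, and the graph is an invented example.

```python
from collections import deque

def tree_search(initial, goal_test, successors):
    fringe = deque([(initial, [initial])])  # unexpanded nodes: (state, path)
    while fringe:                           # no nodes left -> search failed
        state, path = fringe.popleft()      # select a node
        if goal_test(state):
            return path                     # goal found: stop searching
        for child in successors(state):     # expand the selected node
            fringe.append((child, path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(tree_search("A", lambda s: s == "D", graph.__getitem__))
# ['A', 'B', 'D']
```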
Search Terminology
Problem Space − the environment in which the search takes place (a set of states and a set of operators to change those states)
Problem Instance − initial state + goal state
Problem Space Graph − represents the problem space; states are shown by nodes and operators by edges
Depth of a problem − length of the shortest path (shortest sequence of operators) from the initial state to a goal state
Admissibility − the property of an algorithm that it always finds an optimal solution
Branching Factor − the average number of child nodes in the problem space graph
Tree Searching
• In the following sections we will examine a number of tree search strategies. For each strategy, we will assess it against a number of criteria:
• Completeness: does the algorithm always find a solution if there is one?
• Optimality: does the algorithm always find the least-cost solution?
• Space Complexity − the maximum number of nodes that are stored in memory.
• Time Complexity − the maximum number of nodes that are created.
Uninformed Searching
• The simplest type of tree search algorithm is called uninformed, or blind, tree search. These algorithms do not use any additional information about states apart from that provided in the problem definition.
• As a result they are generally inefficient.
Breadth-First Search
• One of the simplest uninformed tree search strategies is breadth-first search.
• Breadth-first search can be implemented by using a FIFO (First-In First-Out) queue for the list of unexpanded nodes.
• In breadth-first search we always select the minimum-depth node for expansion. This has the effect that we "explore" the tree by moving across the breadth of the tree, completely exploring every level before moving down to the next level. Figure 4 illustrates the order of node expansion using breadth-first search on a sample tree with branching factor 3.
Summary of breadth-first search analysis
 Complete: Yes (assuming b is finite)
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimal: Yes, if all step costs are identical (e.g. step cost = 1)
where b – maximum branching factor of the tree
d – depth of the shallowest goal node
Uniform Cost Search
Uniform cost search is similar to breadth-first search, except that it tries to overcome the limitation of not being optimal when step costs are not identical. Instead of always expanding the minimum-depth node, uniform cost search always expands the node with the least path cost. In other words, uniform cost search always expands the node which is "closest" to the initial state, and therefore has the greatest potential for leading to a least-cost solution. If all step costs are identical, uniform cost search is equivalent to breadth-first search.
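Uniform cost search is usually implemented with a priority queue ordered by path cost g(n). A minimal sketch; the graph is an invented example with unequal step costs, chosen so that breadth-first search (fewest steps) would return the costlier direct A→D edge.

```python
import heapq

def uniform_cost_search(start, goal, graph):
    fringe = [(0, start, [start])]              # priority queue on g(n)
    while fringe:
        g, state, path = heapq.heappop(fringe)  # least path cost first
        if state == goal:
            return path, g
        for nbr, cost in graph[state].items():
            heapq.heappush(fringe, (g + cost, nbr, path + [nbr]))
    return None

graph = {"A": {"B": 1, "D": 10}, "B": {"D": 2}, "D": {}}
path, cost = uniform_cost_search("A", "D", graph)
print(path, cost)  # ['A', 'B', 'D'] 3  (BFS would return the direct A-D edge, cost 10)
```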
Summary of uniform cost search
 Complete: Yes (if b is finite and every step cost is at least some positive ε)
 Time complexity: O(b^(C*/ε))
 Space complexity: O(b^(C*/ε))
 Optimal: Yes (because it always expands the lowest-cost node)
where b – maximum branching factor of the tree
ε – minimum step cost
C* – cost of the optimal solution
Depth-First Search
• Depth-first search can be implemented by using a LIFO (Last-In First-Out) stack for the list of unexpanded nodes.
• Depth-first search is an alternative tree search algorithm that has linear space complexity.
• With depth-first search we always choose the deepest node for expansion. For example, Figure 5 illustrates the order of node expansion for the same simple tree we saw in Figure 4.
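Swapping the FIFO queue for a LIFO stack turns the generic tree search into depth-first search. A minimal sketch; the graph is an invented example.

```python
def depth_first_search(start, goal, graph):
    stack = [[start]]                  # LIFO stack of paths
    while stack:
        path = stack.pop()             # most recently added = deepest node
        node = path[-1]
        if node == goal:
            return path
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])  # reversed so the left child is explored first
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(depth_first_search("A", "F", graph))  # ['A', 'C', 'F']
```

Note that the search dives down the whole B subtree (D, then E) before backtracking to C, which is exactly the depth-first expansion order.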
Summary of depth-first search
 Complete: No (it fails in infinite-depth trees)
 Time complexity: O(b^m)
 Space complexity: O(bm) — linear
 Optimal: No
where b – maximum branching factor of the tree
m – maximum depth of the search tree
Depth-limited search
 Depth-first search has much better space complexity than breadth-first or uniform cost search, but it is not complete for infinite-depth trees and it is not optimal. Depth-limited search attempts to overcome the first of these weaknesses. The idea behind depth-limited search is to run a depth-first search but place a limit (a "cut-off") on the maximum depth to search to. For example, if our cut-off depth is l, then we will never expand any nodes below level l.
 Depth-limited search does indeed handle infinite-depth trees better, but it introduces some new weaknesses. If the goal state is below the cut-off level l it will not be found. Also, even if the goal is above level l, depth-limited search cannot be guaranteed to find the least-cost solution, so it is not optimal.
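Depth-limited search is depth-first search with a cut-off, which is natural to write recursively. A minimal sketch; the graph is an invented example whose goal sits at depth 3, so it is missed with limit 2 and found with limit 3.

```python
def depth_limited_search(node, goal, graph, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                    # cut-off reached: do not go deeper
    for child in graph.get(node, []):
        result = depth_limited_search(child, goal, graph, limit - 1)
        if result is not None:
            return [node] + result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
print(depth_limited_search("A", "E", graph, limit=2))  # None: E is below the cut-off
print(depth_limited_search("A", "E", graph, limit=3))  # ['A', 'B', 'D', 'E']
```

Iterative deepening, not covered on this slide, simply reruns this search with limit = 0, 1, 2, … until a solution is found.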
Summary of depth-limited search
 Complete: Yes, if the shallowest solution is within the depth limit (d ≤ l)
 Time complexity: O(b^l)
 Space complexity: O(bl)
 Optimal: No in general
where b – maximum branching factor of the tree
l – depth limit
d – depth of the shallowest solution
Informed Search
• Informed search algorithms attempt to use extra domain knowledge to inform the search, in an attempt to reduce search time.
• A particular class of informed search algorithms is known as best-first search. Note that best-first search is not an algorithm itself, but a general approach. In best-first search, we use a heuristic function to estimate which of the nodes in the fringe is the "best" node for expansion. This heuristic function, h(n), estimates the cost of the cheapest path from node n to the goal state. In other words, it tells us which of the nodes in the fringe it thinks is "closest" to the goal.
• We will now examine two similar, but not identical, best-first search algorithms: greedy best-first search and A* search.
Greedy Best-First Search
 The simplest best-first search algorithm is greedy best-first search.
 This algorithm simply expands the node that is estimated to be closest to the goal, i.e. the one with the lowest value of the heuristic function h(n).
 For example, let us return to the Romania example we introduced in the previous chapter (the state space is reproduced in Figure 1 for ease of reference). What information can we use to estimate the actual road distance from a city to Bucharest? In other words, what domain knowledge can we use to estimate which of the unexpanded nodes is closest to Bucharest? One possible answer is to use the straight-line distance from each city to Bucharest. Table 1 shows a list of all these distances.
Using this information, the greedy best-first search algorithm will select for expansion the node from the unexpanded fringe list with the lowest value of hSLD(n).
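This selection rule can be sketched with a priority queue ordered by h(n) alone. The map fragment and straight-line-distance values below are the ones whose f(n) calculations appear in the A* trace later in these slides; the rest of the map is omitted for brevity.

```python
import heapq

def greedy_best_first(start, goal, graph, h):
    fringe = [(h[start], start, [start])]   # priority queue on h(n) only
    visited = set()                         # guard against the loops greedy can fall into
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nbr in graph[state]:
            heapq.heappush(fringe, (h[nbr], nbr, path + [nbr]))
    return None

graph = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
print(greedy_best_first("Arad", "Bucharest", graph, h))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] - found quickly, but not least-cost
```

The route via Fagaras costs 140 + 99 + 211 = 450, more than the optimal 418 via Rimnicu Vilcea and Pitesti, illustrating why greedy best-first search is not optimal.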
Summary of Greedy Best-First Search
 Completeness: No — it can get stuck in loops
 Optimality: No — it can choose non-optimal solutions that look good in the short term
 Time complexity: O(b^m), but a good heuristic can give dramatic improvement
 Space complexity: same as time complexity
where b – maximum branching factor of the tree
m – maximum depth of the search tree
A* Search algorithm
A* search is similar to greedy best-first search, except that it also takes into account the actual path cost taken so far to reach each node. The node with the lowest estimated total path cost f(n) is expanded, where
f(n) = g(n) + h(n)
g(n) = total actual path cost to get to node n
h(n) = estimated path cost to get from node n to the goal
Cont….
Red-colored numbers indicate the heuristic value for each node, and blue-colored numbers indicate the path cost from one node to the next. Our initial state is node A and the goal state is node G.
Cont….
Execution of A* search on the Romania map is given below.
Step 1:
 Fringe = [Arad]
 Lowest value of evaluation function: f(Arad) = 0 + 366 = 366
 Action: expand Arad
Step 2:
 Fringe = [Sibiu, Timisoara, Zerind]
 Lowest value of evaluation function: f(Sibiu) = 140 + 253 = 393
 Action: expand Sibiu
Step 3:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Rimnicu Vilcea) = 220 + 193 = 413
 Action: expand Rimnicu Vilcea
Step 4:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Craiova, Pitesti, Sibiu]
 Lowest value of evaluation function: f(Fagaras) = 239 + 176 = 415
 Action: expand Fagaras
Step 5:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Pitesti, Sibiu, Sibiu, Bucharest]
 Lowest value of evaluation function: f(Pitesti) = 317 + 100 = 417
 Action: expand Pitesti
Step 6:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Sibiu, Sibiu, Bucharest, Bucharest, Craiova, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Bucharest) = 418 + 0 = 418
 Action: goal found at Bucharest.
Notice that A* search finds a different (and optimal) solution to greedy best-first search, getting to Bucharest via Sibiu, Rimnicu Vilcea and Pitesti, rather than via Sibiu and Fagaras.
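The result of this trace can be reproduced with a small A* implementation. A sketch restricted to the cities on the two candidate routes, using only the step costs and h values that appear in the trace's f(n) calculations (the dead-end branches are omitted for brevity).

```python
import heapq

def a_star(start, goal, graph, h):
    fringe = [(h[start], 0, start, [start])]       # (f, g, state, path)
    while fringe:
        f, g, state, path = heapq.heappop(fringe)  # lowest f = g + h first
        if state == goal:
            return path, g
        for nbr, cost in graph[state].items():
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

graph = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
path, cost = a_star("Arad", "Bucharest", graph, h)
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

As in the trace, Fagaras (f = 415) is expanded before Pitesti (f = 417), but the route through Fagaras reaches Bucharest with f = 450, so the Pitesti route wins with cost 418.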
Summary of A* Search
 Completeness: Yes
 Optimality: Yes (with an admissible heuristic)
 Time complexity: O(b^m), but a good heuristic can give dramatic improvement
 Space complexity: same as time complexity
where b – maximum branching factor of the tree
m – depth of the least-cost solution
Quiz (10%)
1. Which searching algorithm do you think is better, and why?
2. What is the difference between blind and heuristic searching algorithms?
3. List and explain the criteria that can be used to assess the performance of searching algorithms.