19CS308T: Artificial Intelligence
UNIT-I Introduction
Faculty: Mr. K. Sundar
Syllabus
• Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems – Problem Solving Methods – Search Strategies – Uninformed Search.
• Case study: Water Jug Problem, Travelling Salesman Problem, etc.
What is artificial intelligence?
• Popular conception driven by science fiction
– Robots good at everything except emotions, empathy,
appreciation of art, culture, …
• … until later in the movie.
– Perhaps more representative of human autism than of
(current) real robotics/AI
• “It is my belief that the existence of autism has
contributed to [the theme of the intelligent but soulless
automaton] in no small way.” [Uta Frith, “Autism”]
• Current AI is also bad at lots of simpler stuff!
• There is a lot of AI work on thinking about what other
agents are thinking
Real AI
• A serious science.
• General-purpose AI like the robots of science
fiction is incredibly hard
– Human brain appears to have lots of special and
general functions, integrated in some amazing way
that we really do not understand at all (yet)
• Special-purpose AI is more doable (nontrivial)
– E.g., chess/poker playing programs, logistics
planning, automated translation, voice recognition,
web search, data mining, medical diagnosis,
keeping a car on the road, … … … …
Definitions of AI
• Systems that think like humans
• Systems that think rationally
• Systems that act like humans
• Systems that act rationally

A focus on action avoids philosophical issues such as “is the system conscious” etc., and if our system can be more rational than humans in some cases, why not?
• We will follow the “act rationally” approach
– The distinction may not be that important:
• acting rationally/like a human presumably requires (some sort of) thinking rationally/like a human
• humans are much more rational anyway in complex domains
“Chinese room”
argument [Searle 1980]
• Person who knows English but not Chinese sits in room
• Receives notes in Chinese
• Has systematic English rule book for how to write new Chinese
characters based on input Chinese characters, returns his notes
– Person=CPU, rule book=AI program, really also need lots of paper (storage)
– Has no understanding of what they mean
– But from the outside, the room gives perfectly reasonable answers in
Chinese!
• Searle’s argument: the room has no intelligence in it!
image from http://www.unc.edu/~prinz/pictures/c-room.gif
Turing Test
• (Human) judge communicates with a human and a
machine over text-only channel,
• Both human and machine try to act like a human,
• Judge tries to tell which is which.
• Numerous variants
• Loebner prize
• Current programs nowhere close to passing this
– http://www.jabberwacky.com/
– http://turingtrade.org/
image from http://en.wikipedia.org/wiki/Turing_test
Turing Test on unsuspecting judges
• It is possible to (temporarily) fool humans who do
not realize they may be talking to a bot
• ELIZA program [Weizenbaum 66] rephrases
partner’s statements and questions
(~psychotherapist)
Modern variant: AOLiza
[fragment from http://archives.cnn.com/2000/TECH/computing/08/29/aoliza.idg/]
five: just asking if richard can tell me about some good lobster diving spots thats
it if this is a problem then forget iot i dont know whats going on here
AOLiza: that is interesting. please continue.
five: continue what just ask richard when he gets home about some lobster
diving up hat way thats all if you got a problem with it then forget it,this isnt worht
it i dont know whats been going on or what bev has told you but i havent done
anything to anybody iam the one that got used and dumped
AOLiza: tell me more about that.
five: about what
AOLiza: what do you think?
five: therers nothing to tell she us3ed me then got tired of me said i wasnt her
type anymore i was possesive and controlling
AOLiza: perhaps i already know you were possesive and controlling.
five: i was huh you nuts i gave her all the room she wanted never told nhewre
what to do or where to go
Is Turing Test the right goal?
• “Aeronautical engineering texts do not define
the goal of their field as making ‘machines that
fly so exactly like pigeons that they can fool
even other pigeons.’” [Russell and Norvig]
Lessons from AI research
• Clearly-defined tasks that we think require intelligence and education
from humans tend to be doable for AI techniques
– Playing chess, drawing logical inferences from clearly-stated facts, performing
probability calculations in well-defined environments, …
– Although, scalability can be a significant issue
• Complex, messy, ambiguous tasks that come naturally to humans (in
some cases other animals) are much harder
– Recognizing your grandmother in a crowd, drawing the right conclusion from an
ungrammatical or ambiguous sentence, driving around the city, …
• Humans better at coming up with reasonably good solutions
in complex environments
• Humans better at adapting/self-evaluation/creativity (“My
usual strategy for chess is getting me into trouble against
this person… Why? What else can I do?”)
Early history of AI
• 50s/60s: Early successes! AI can draw logical conclusions,
prove some theorems, create simple plans… Some initial
work on neural networks…
• Led to overhyping: researchers promised funding agencies
spectacular progress, but started running into difficulties:
– Ambiguity: highly funded translation programs (Russian to English)
were good at syntactic manipulation but bad at disambiguation
• “The spirit is willing but the flesh is weak” becomes “The vodka is good but the
meat is rotten”
– Scalability/complexity: early examples were very small, programs could
not scale to bigger instances
– Limitations of representations used
History of AI…
• 70s, 80s: Creation of expert systems (systems
specialized for one particular task based on
experts’ knowledge), wide industry adoption
• Again, overpromising…
• … led to AI winter(s)
– Funding cutbacks, bad reputation
Modern AI
• More rigorous, scientific, formal/mathematical
• Fewer grandiose promises
• Divided into many subareas interested in particular
aspects
• More directly connected to “neighboring” disciplines
– Theoretical computer science, statistics, economics,
operations research, biology, psychology/neuroscience, …
– Often leads to question “Is this really AI”?
• Some senior AI researchers are calling for re-
integration of all these topics, return to more
grandiose goals of AI
– Somewhat risky proposition for graduate students and
junior faculty…
Intelligent Agent
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators, as shown in the figure below.

Figure: Agent interaction with the environment. The agent receives percepts from the environment through its sensors and acts on the environment through its actuators.
Agent Function and Agent Program
• An agent's action is determined by the percept sequence it has observed so far.
• The agent function maps any given percept sequence to an action; the agent program is the concrete implementation of that function, running on the agent's architecture.
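As a minimal illustration (not from the original slides), an agent program can be written as a function that maps the percept sequence seen so far to an action. The table, percept values, and action names below are hypothetical.

# Sketch of a table-driven agent program (illustrative; percepts/actions are made up).
def make_table_driven_agent(table, default_action):
    percepts = []                       # the percept sequence observed so far
    def agent_program(percept):
        percepts.append(percept)
        # the agent function is (conceptually) a huge table indexed by percept sequences
        return table.get(tuple(percepts), default_action)
    return agent_program

# A toy two-cell vacuum-world-style table (hypothetical).
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = make_table_driven_agent(table, default_action="no_op")
print(agent(("A", "dirty")))            # -> suck
print(agent(("B", "clean")))            # -> no_op (sequence not in the table)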
Good behaviour
1. Rationality: a rational agent selects the action that is expected to maximize its performance measure, given its percept sequence so far.
2. Learning: the agent should learn from its percept sequences and improve over time.
3. Omniscience: an omniscient agent would know the actual outcome of its actions; rationality does not require omniscience, only maximizing expected performance.
4. Autonomy: the agent should be able to compensate for partial or inaccurate prior knowledge.
Nature of environments
• Specifying the task environment: a task environment specification includes the performance measure, the external environment, the actuators, and the sensors.
• This specification is conventionally summarized as a PEAS (Performance measure, Environment, Actuators, Sensors) description of the task.
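For example, a standard textbook PEAS description (from Russell and Norvig) for an automated taxi driver:
– Performance measure: safe, fast, legal, comfortable trip; maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard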
Search
• We have some actions that can change the state
of the world
– Change induced by an action perfectly predictable
• Try to come up with a sequence of actions that will
lead us to a goal state
– May want to minimize number of actions
– More generally, may want to minimize total cost of actions
• Do not need to execute actions in real life while
searching for solution!
– Everything perfectly predictable anyway
A simple example:
traveling on a graph

Figure: a weighted graph with vertices A, B, C, D, E, F and edge costs 2, 2, 3, 3, 4, 4 and 9; A is the start state and F is the goal state.
Searching for a solution

Figure: part of the same graph (vertices A, B, C, D, F and some of the edge costs), again with A as the start state and F as the goal state.
Search tree

state = A, cost = 0
state = B, cost = 3
state = D, cost = 3
state = C, cost = 5
state = F, cost = 12 (goal state!)
state = A, cost = 7

search tree nodes and states are not the same thing!
Full search tree

state = A, cost = 0
state = B, cost = 3
state = D, cost = 3
state = C, cost = 5
state = F, cost = 12 (goal state!)
state = A, cost = 7
state = E, cost = 7
state = F, cost = 11 (goal state!)
state = B, cost = 10
state = D, cost = 10
...
Changing the goal:
want to visit all vertices on the graph

Figure: the same weighted graph (vertices A, B, C, D, E, F; A is the start state).

• need a different definition of a state: “currently at A, also visited B, C already”
• large number of states: n·2^(n-1)
• could turn these into a graph, but…
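A sketch of this state representation in Python (the edge list below is hypothetical, not necessarily the graph in the figure): a state is the pair (current vertex, set of vertices visited so far).

# Sketch: states for the "visit all vertices" goal (illustrative edge costs).
EDGES = {("A", "B"): 3, ("A", "D"): 3, ("B", "C"): 2, ("B", "F"): 9,
         ("D", "E"): 4, ("E", "F"): 4}
GRAPH = {}
for (u, v), c in EDGES.items():                 # build an undirected adjacency list
    GRAPH.setdefault(u, []).append((v, c))
    GRAPH.setdefault(v, []).append((u, c))
ALL_VERTICES = frozenset(GRAPH)

def successors(state):
    """Yield (next_state, step_cost) pairs for a (location, visited) state."""
    location, visited = state
    for neighbor, cost in GRAPH[location]:
        yield (neighbor, visited | {neighbor}), cost

def is_goal(state):
    return state[1] == ALL_VERTICES

start = ("A", frozenset({"A"}))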
Full search tree

state = A, {}, cost = 0
state = B, {A}, cost = 3
state = D, {A}, cost = 3
state = C, {A, B}, cost = 5
state = F, {A, B}, cost = 12
state = A, {B, C}, cost = 7
state = E, {A, D}, cost = 7
state = F, {A, D, E}, cost = 11
state = B, {A, C}, cost = 10
state = D, {A, B, C}, cost = 10
...

What would happen if the goal were to visit every location twice?
Key concepts in search
• Set of states that we can be in
– Including an initial state…
– … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
– Each action results in a new state
– Typically defined by successor function
• Given a state, produces all states that can be reached from it
• Cost function that determines the cost of each
action (or path = sequence of actions)
• Solution: path from initial state to a goal state
– Optimal solution: solution with minimal cost
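A hedged sketch of how these ingredients can be expressed in code (the class name and the small example graph are illustrative assumptions, not from the slides):

# Sketch: a search problem = initial state + goal test + successor function + costs.
class SearchProblem:
    def __init__(self, initial_state, goal_states, graph):
        self.initial_state = initial_state
        self.goal_states = goal_states
        self.graph = graph              # dict: state -> list of (next_state, step_cost)

    def is_goal(self, state):
        return state in self.goal_states

    def successors(self, state):
        return self.graph.get(state, [])

# A small hypothetical route-finding instance (not the graph from the figures).
graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "D": [("E", 4)], "E": [("F", 4)]}
problem = SearchProblem("A", {"F"}, graph)
print(problem.successors("A"))          # -> [('B', 3), ('D', 3)]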
8-puzzle

goal state:        a scrambled state:
 1 2 3              1 2 _
 4 5 6              4 5 3
 7 8 _              7 8 6
8-puzzle

Figure: an 8-puzzle state and the states reachable from it in one move (sliding a single tile into the blank); this is one level of the 8-puzzle search tree, which continues downward.
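A minimal sketch (not from the slides) of an 8-puzzle successor function; representing a board as a flat tuple with 0 for the blank is an assumption made for illustration.

# Sketch: 8-puzzle successor function (state = tuple of 9 entries, 0 = blank).
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def successors(state):
    """Return all states reachable by sliding one tile into the blank."""
    results = []
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for drow, dcol in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # cells adjacent to the blank
        r, c = row + drow, col + dcol
        if 0 <= r < 3 and 0 <= c < 3:
            other = 3 * r + c
            board = list(state)
            board[blank], board[other] = board[other], board[blank]
            results.append(tuple(board))
    return results

scrambled = (1, 2, 0, 4, 5, 3, 7, 8, 6)   # the scrambled board shown above, blank written as 0
print(successors(scrambled))              # two successor states in this case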
Generic search algorithm
• Fringe = set of nodes generated but not expanded
• fringe := {initial state}
• loop:
– if fringe empty, declare failure
– choose and remove a node v from fringe
– check if v’s state s is a goal state; if so, declare success
– if not, expand v, insert resulting nodes into fringe
• Key question in search: Which of the generated
nodes do we expand next?
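A runnable Python sketch of this loop (illustrative; the example graph is hypothetical). The way nodes are chosen from the fringe is left as a parameter, which is exactly the key question above.

# Sketch of the generic search algorithm; the fringe discipline decides the strategy.
def generic_search(start, is_goal, successors, pop):
    """pop(fringe) chooses and removes the next node to expand."""
    fringe = [(start, [start], 0)]              # nodes: (state, path, path_cost)
    while fringe:
        state, path, cost = pop(fringe)
        if is_goal(state):
            return path, cost                   # success
        for next_state, step_cost in successors(state):
            fringe.append((next_state, path + [next_state], cost + step_cost))
    return None                                 # fringe empty: failure

# A small hypothetical graph.
graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "C": [], "D": [("E", 4)], "E": [("F", 4)], "F": []}
succ = lambda s: graph[s]
goal = lambda s: s == "F"
print(generic_search("A", goal, succ, pop=lambda f: f.pop(0)))   # FIFO fringe -> BFS
print(generic_search("A", goal, succ, pop=lambda f: f.pop()))    # LIFO fringe -> DFS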
Uninformed search
• Given a state, we only know whether it is a goal
state or not
• Cannot say one nongoal state looks better than
another nongoal state
• Can only traverse state space blindly in hope of
somehow hitting a goal state at some point
– Also called blind search
– Blind does not imply unsystematic!
Breadth-first search
Properties of breadth-first search
• Nodes are expanded in the same order in which they are
generated
– Fringe can be maintained as a First-In-First-Out (FIFO) queue
• BFS is complete: if a solution exists, one will be found
• BFS finds a shallowest solution
– Not necessarily an optimal solution
• If every node has b successors (the branching factor), and the first solution is at depth d, then the fringe will contain at least b^d nodes at some point
– This much space (and time) is required
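A sketch of BFS with the fringe kept as a FIFO queue (collections.deque); the example graph is hypothetical.

from collections import deque

# Sketch: breadth-first search, fringe = FIFO queue.
def breadth_first_search(start, is_goal, successors):
    fringe = deque([(start, [start])])          # oldest generated node is expanded first
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):
            return path                          # a shallowest solution (not necessarily cheapest)
        for next_state, _step_cost in successors(state):
            fringe.append((next_state, path + [next_state]))
    return None

graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "C": [], "D": [("E", 4)], "E": [("F", 4)], "F": []}
print(breadth_first_search("A", lambda s: s == "F", lambda s: graph[s]))   # ['A', 'B', 'F']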
Depth-first search
Implementing depth-first search
• Fringe can be maintained as a Last-In-First-Out (LIFO)
queue (aka. a stack)
• Also easy to implement recursively:
• DFS(node)
– If goal(node) return solution(node);
– For each successor of node
• Return DFS(successor) unless it is failure;
– Return failure;
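A direct Python rendering of this recursive pseudocode (a sketch with a hypothetical graph; as the next slide notes, plain DFS like this can run forever if the state space contains cycles).

# Sketch: recursive depth-first search.
def dfs(state, is_goal, successors, path=None):
    path = [state] if path is None else path
    if is_goal(state):
        return path                              # solution(node)
    for next_state, _step_cost in successors(state):
        result = dfs(next_state, is_goal, successors, path + [next_state])
        if result is not None:                   # "unless it is failure"
            return result
    return None                                  # failure

graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "C": [], "D": [("E", 4)], "E": [("F", 4)], "F": []}
print(dfs("A", lambda s: s == "F", lambda s: graph[s]))   # ['A', 'B', 'F']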
Properties of depth-first search
• Not complete (might cycle through nongoal states)
• If solution found, generally not optimal/shallowest
• If every node has b successors (the branching factor), and we search to at most depth m, the fringe is at most b·m
– Much better space requirement
– Actually, generally don't even need to store all of fringe
• Time: still need to look at every node
– b^m + b^(m-1) + … + 1 (for b > 1, this is O(b^m))
– Inevitable for uninformed search methods…
Combining good properties of BFS and DFS
• Limited depth DFS: just like DFS, except never go deeper
than some depth d
• Iterative deepening DFS:
– Call limited depth DFS with depth 0;
– If unsuccessful, call with depth 1;
– If unsuccessful, call with depth 2;
– Etc.
• Complete, finds shallowest solution
• Space requirements of DFS
• May seem wasteful timewise because replicating effort
– Really not that wasteful because almost all effort at deepest level
– d·b + (d-1)·b^2 + (d-2)·b^3 + ... + 1·b^d, which is O(b^d) for b > 1
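One possible sketch of limited-depth DFS and iterative deepening on top of it (illustrative; max_depth simply caps the outer loop).

# Sketch: depth-limited DFS and iterative deepening DFS.
def depth_limited_dfs(state, is_goal, successors, limit, path=None):
    path = [state] if path is None else path
    if is_goal(state):
        return path
    if limit == 0:
        return None                              # never go deeper than the limit
    for next_state, _cost in successors(state):
        result = depth_limited_dfs(next_state, is_goal, successors, limit - 1,
                                   path + [next_state])
        if result is not None:
            return result
    return None

def iterative_deepening_dfs(start, is_goal, successors, max_depth=50):
    for depth in range(max_depth + 1):           # depth 0, then 1, then 2, ...
        result = depth_limited_dfs(start, is_goal, successors, depth)
        if result is not None:
            return result                        # shallowest solution, DFS-like memory use
    return None

graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "C": [], "D": [("E", 4)], "E": [("F", 4)], "F": []}
print(iterative_deepening_dfs("A", lambda s: s == "F", lambda s: graph[s]))   # ['A', 'B', 'F']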
Let’s start thinking about cost
• BFS finds shallowest solution because always works on
shallowest nodes first
• Similar idea: always work on the lowest-cost node first
(uniform-cost search)
• Will find optimal solution (assuming costs increase by at
least constant amount along path)
• Will often pursue lots of short steps first
• If optimal cost is C, and cost increases by at least L each
step, we can go to depth C/L
• Similar memory problems as BFS
– Iterative lengthening DFS does DFS up to increasing costs
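A sketch of uniform-cost search with the fringe as a priority queue (heapq); the example graph is hypothetical. Note that it returns the cheapest path (cost 11 here) rather than the shallowest one (cost 12).

import heapq
from itertools import count

# Sketch: uniform-cost search, fringe = priority queue ordered by path cost.
def uniform_cost_search(start, is_goal, successors):
    tie = count()                                 # tie-breaker so the heap never compares paths
    fringe = [(0, next(tie), start, [start])]     # (path_cost, tiebreak, state, path)
    while fringe:
        cost, _, state, path = heapq.heappop(fringe)   # lowest-cost node first
        if is_goal(state):
            return path, cost                     # optimal if every step cost is at least L > 0
        for next_state, step_cost in successors(state):
            heapq.heappush(fringe, (cost + step_cost, next(tie),
                                    next_state, path + [next_state]))
    return None

graph = {"A": [("B", 3), ("D", 3)], "B": [("C", 2), ("F", 9)],
         "C": [], "D": [("E", 4)], "E": [("F", 4)], "F": []}
print(uniform_cost_search("A", lambda s: s == "F", lambda s: graph[s]))   # (['A', 'D', 'E', 'F'], 11)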
Searching backwards from the goal
• Sometimes can search backwards from the goal
– Maze puzzles
– Eights puzzle
– Reaching location F
– What about the goal of “having visited all locations”?
• Need to be able to compute predecessors instead
of successors
• What’s the point?
Predecessor branching factor can be
smaller than successor branching factor
• Stacking blocks:
– only action is to add something to the stack
Figure: start state (empty stack; blocks A, B, C in hand) and goal state (A, B, C stacked; nothing in hand).
We’ll see more of this…
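A tiny sketch (with an illustrative state representation) of why the backward direction is narrower here: going forward, any block still in hand can be stacked next, but going backward from a stack, the only possible last action was placing the current top block.

# Sketch: block stacking; a state is (stack_from_bottom, blocks_still_in_hand).
def successors(state):
    stack, in_hand = state
    return [((stack + (b,), in_hand - {b}), 1) for b in in_hand]     # put any held block on top

def predecessors(state):
    stack, in_hand = state
    if not stack:
        return []
    return [((stack[:-1], in_hand | {stack[-1]}), 1)]                # un-place the top block only

start = ((), frozenset({"A", "B", "C"}))
goal = (("C", "B", "A"), frozenset())
print(len(successors(start)))      # 3 forward choices from the start state
print(len(predecessors(goal)))     # 1 backward choice from the goal state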
Bidirectional search
• Even better: search from both the start and the
goal, in parallel!
• If the shallowest solution has depth d and the branching factor is b on both sides, only O(b^(d/2)) nodes need to be explored!
image from cs-alb-pc3.massey.ac.nz/notes/59302/fig03.17.gif
Making bidirectional search work
• Need to be able to figure out whether the fringes
intersect
– Need to keep at least one fringe in memory…
• Other than that, can do various kinds of search on
either tree, and get the corresponding optimality
etc. guarantees
• Not possible (feasible) if backwards search not
possible (feasible)
– Hard to compute predecessors
– High predecessor branching factor
– Too many goal states
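A sketch of bidirectional breadth-first search on a hypothetical undirected graph (where successors and predecessors coincide): expand one step from each side and stop as soon as a state appears on both fringes.

from collections import deque

# Sketch: bidirectional BFS; stops when the forward and backward searches meet.
def bidirectional_bfs(start, goal, successors, predecessors):
    if start == goal:
        return [start]
    fwd_parents, bwd_parents = {start: None}, {goal: None}
    fwd_fringe, bwd_fringe = deque([start]), deque([goal])

    def expand(fringe, parents, neighbors, other_parents):
        state = fringe.popleft()
        for nxt in neighbors(state):
            if nxt not in parents:
                parents[nxt] = state
                if nxt in other_parents:          # the two fringes intersect here
                    return nxt
                fringe.append(nxt)
        return None

    while fwd_fringe and bwd_fringe:
        meet = expand(fwd_fringe, fwd_parents, successors, bwd_parents)
        if meet is None:
            meet = expand(bwd_fringe, bwd_parents, predecessors, fwd_parents)
        if meet is not None:                      # stitch the two half-paths together
            path, s = [], meet
            while s is not None:
                path.append(s)
                s = fwd_parents[s]
            path.reverse()
            s = bwd_parents[meet]
            while s is not None:
                path.append(s)
                s = bwd_parents[s]
            return path
    return None

graph = {"A": ["B", "D"], "B": ["A", "C", "F"], "C": ["B"],
         "D": ["A", "E"], "E": ["D", "F"], "F": ["B", "E"]}
nbrs = lambda s: graph[s]
print(bidirectional_bfs("A", "F", nbrs, nbrs))    # e.g. ['A', 'B', 'F']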
Repeated states
• Repeated states can cause incompleteness or enormous
runtimes
• Can maintain list of previously visited states to avoid this
– If new path to the same state has greater cost, don’t pursue it further
– Leads to time/space tradeoff
• “Algorithms that forget their history are doomed to repeat
it” [Russell and Norvig]
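A sketch of this bookkeeping for the lowest-cost-first (uniform-cost) setting: remember the best known cost to every state that has been seen, and only pursue a newly generated path if it is strictly cheaper. The small cyclic graph at the end is hypothetical.

import heapq

# Sketch: uniform-cost search that remembers visited states and their best known costs.
def uniform_cost_search_with_memory(start, is_goal, successors):
    best_cost = {start: 0}                        # best path cost found so far to each state
    fringe = [(0, start, [start])]
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, cost
        if cost > best_cost.get(state, float("inf")):
            continue                              # a cheaper path to this state already exists
        for next_state, step_cost in successors(state):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(next_state, float("inf")):
                best_cost[next_state] = new_cost  # greater-cost paths are not pursued further
                heapq.heappush(fringe, (new_cost, next_state, path + [next_state]))
    return None

# A small cyclic graph (hypothetical costs); without the memory, the cycle keeps
# re-generating the same states and the search tree grows needlessly.
graph = {"A": [("B", 3), ("C", 2)], "B": [("A", 3), ("C", 2)], "C": [("A", 2), ("B", 2)]}
print(uniform_cost_search_with_memory("A", lambda s: s == "B", lambda s: graph[s]))   # (['A', 'B'], 3)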
Figure: a small cyclic graph with vertices A, B, C (edge costs 3, 2, 2); cycles lead to exponentially large search trees (try it!)