2. Syllabus
• Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems – Problem Solving Methods – Search Strategies – Uninformed Search.
• Case study: Water Jug Problem, Travelling Salesman Problem, etc.
3. What is artificial intelligence?
• Popular conception driven by science fiction
– Robots good at everything except emotions, empathy, appreciation of art, culture, …
• … until later in the movie.
– Perhaps more representative of human autism than of (current) real robotics/AI
• "It is my belief that the existence of autism has contributed to [the theme of the intelligent but soulless automaton] in no small way." [Uta Frith, "Autism"]
• Current AI is also bad at lots of simpler stuff!
• There is a lot of AI work on thinking about what other agents are thinking
4. Real AI
• A serious science.
• General-purpose AI like the robots of science fiction is incredibly hard
– The human brain appears to have lots of special and general functions, integrated in some amazing way that we really do not understand at all (yet)
• Special-purpose AI is more doable (nontrivial)
– E.g., chess/poker-playing programs, logistics planning, automated translation, voice recognition, web search, data mining, medical diagnosis, keeping a car on the road, …
5. Definitions of AI
• Four families of definitions:
– Systems that think like humans / Systems that think rationally
– Systems that act like humans / Systems that act rationally
• A focus on action avoids philosophical issues such as "is the system conscious?" etc.
• If our system can be more rational than humans in some cases, why not?
• We will follow the "act rationally" approach
– The distinction may not be that important
• acting rationally/like a human presumably requires (some sort of) thinking rationally/like a human
• humans are much more rational anyway in complex domains
6. "Chinese room" argument [Searle 1980]
• A person who knows English but not Chinese sits in a room
• Receives notes in Chinese
• Has a systematic English rule book for how to write new Chinese characters based on input Chinese characters; returns his notes
– Person = CPU, rule book = AI program; really also need lots of paper (storage)
– Has no understanding of what they mean
– But from the outside, the room gives perfectly reasonable answers in Chinese!
• Searle's argument: the room has no intelligence in it!
image from http://www.unc.edu/~prinz/pictures/c-room.gif
7. Turing Test
• (Human) judge communicates with a human and a machine over a text-only channel
• Both human and machine try to act like a human
• Judge tries to tell which is which
• Numerous variants
• Loebner Prize
• Current programs nowhere close to passing this
– http://www.jabberwacky.com/
– http://turingtrade.org/
image from http://en.wikipedia.org/wiki/Turing_test
8. Turing Test on unsuspecting judges
• It is possible to (temporarily) fool humans who do not realize they may be talking to a bot
• ELIZA program [Weizenbaum 66] rephrases the partner's statements and questions (~psychotherapist)
9. Modern variant: AOLiza
[fragment from http://archives.cnn.com/2000/TECH/computing/08/29/aoliza.idg/]
five: just asking if richard can tell me about some good lobster diving spots thats
it if this is a problem then forget iot i dont know whats going on here
AOLiza: that is interesting. please continue.
five: continue what just ask richard when he gets home about some lobster
diving up hat way thats all if you got a problem with it then forget it,this isnt worht
it i dont know whats been going on or what bev has told you but i havent done
anything to anybody iam the one that got used and dumped
AOLiza: tell me more about that.
five: about what
AOLiza: what do you think?
five: therers nothing to tell she us3ed me then got tired of me said i wasnt her
type anymore i was possesive and controlling
AOLiza: perhaps i already know you were possesive and controlling.
five: i was huh you nuts i gave her all the room she wanted never told nhewre
what to do or where to go
10. Is the Turing Test the right goal?
• "Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'" [Russell and Norvig]
11. Lessons from AI research
• Clearly defined tasks that we think require intelligence and education from humans tend to be doable for AI techniques
– Playing chess, drawing logical inferences from clearly stated facts, performing probability calculations in well-defined environments, …
– Although scalability can be a significant issue
• Complex, messy, ambiguous tasks that come naturally to humans (in some cases other animals) are much harder
– Recognizing your grandmother in a crowd, drawing the right conclusion from an ungrammatical or ambiguous sentence, driving around the city, …
• Humans are better at coming up with reasonably good solutions in complex environments
• Humans are better at adapting/self-evaluation/creativity ("My usual strategy for chess is getting me into trouble against this person… Why? What else can I do?")
12. Early history of AI
• 50s/60s: Early successes! AI can draw logical conclusions, prove some theorems, create simple plans… Some initial work on neural networks…
• Led to overhyping: researchers promised funding agencies spectacular progress, but started running into difficulties:
– Ambiguity: highly funded translation programs (Russian to English) were good at syntactic manipulation but bad at disambiguation
• "The spirit is willing but the flesh is weak" becomes "The vodka is good but the meat is rotten"
– Scalability/complexity: early examples were very small; programs could not scale to bigger instances
– Limitations of the representations used
13. History of AI…
• 70s, 80s: Creation of expert systems (systems specialized for one particular task, based on experts' knowledge); wide industry adoption
• Again, overpromising…
• … led to AI winter(s)
– Funding cutbacks, bad reputation
14. Modern AI
• More rigorous, scientific, formal/mathematical
• Fewer grandiose promises
• Divided into many subareas interested in particular aspects
• More directly connected to "neighboring" disciplines
– Theoretical computer science, statistics, economics, operations research, biology, psychology/neuroscience, …
– Often leads to the question "Is this really AI?"
• Some senior AI researchers are calling for re-integration of all these topics, a return to the more grandiose goals of AI
– Somewhat risky proposition for graduate students and junior faculty…
15. Intelligent Agent
Hope Foundationās International Institute of Information Technology, IĀ²IT, P-14 Rajiv Gandhi Infotech Park,
Hinjawadi, Pune - 411 057
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators, as shown in the figure.
[Figure: Agent interaction with environment; the agent receives percepts from the environment through sensors and acts on it through actuators]
16. Agent Function and Agent Program
• The agent's action is decided by the percept sequence it has perceived so far.
• The agent program implements this mapping from percept sequences to actions.
17. Good behaviour
1. Rationality: the agent selects the action that maximizes its performance measure
2. Learning: the agent should learn from its percept sequences
3. Omniscience: the agent should know the outcome of its actions
4. Autonomy: the agent should compensate for partial or inaccurate knowledge
18. Nature of environments
• Specifying the task environment
• A task environment specification includes the performance measure, the external environment, the actuators, and the sensors
19. Nature of environments
• Task performance can be measured by the following parameters:
1. PEAS (Performance, Environment, Actuators, Sensors) description
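As a concrete illustration, here is the classic PEAS description of an automated taxi driver from Russell and Norvig, written as a plain mapping (the variable name is ours):

```python
# PEAS description for an automated taxi driver (Russell & Norvig's example).
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
```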
21. Search
• We have some actions that can change the state of the world
– The change induced by an action is perfectly predictable
• Try to come up with a sequence of actions that will lead us to a goal state
– May want to minimize the number of actions
– More generally, may want to minimize the total cost of actions
• Do not need to execute actions in real life while searching for a solution!
– Everything is perfectly predictable anyway
23. Searching for a solution
[Figure: example graph with vertices A, B, C, D, F and edge costs 2, 2, 3, 3, 9; A is the start state, F is the goal state]
24. Search tree
state = A, cost = 0
– state = B, cost = 3 (successor of A)
– state = D, cost = 3 (successor of A)
– state = C, cost = 5 (successor of B)
– state = F, cost = 12 (successor of B): goal state!
– state = A, cost = 7 (successor of C)
search tree nodes and states are not the same thing!
25. Full search tree
state = A, cost = 0
– state = B, cost = 3 (successor of A)
– state = D, cost = 3 (successor of A)
– state = C, cost = 5 (successor of B)
– state = F, cost = 12 (successor of B): goal state!
– state = A, cost = 7 (successor of C)
– state = E, cost = 7 (successor of D)
– state = F, cost = 11 (successor of E): goal state!
– state = B, cost = 10 (successor of the repeated A)
– state = D, cost = 10 (successor of the repeated A)
… (the tree continues)
26. Changing the goal: want to visit all vertices on the graph
[Figure: the same graph extended with vertex E; edge costs 3, 4, 4, 3, 9, 2, 2]
• Need a different definition of a state: "currently at A, also visited B, C already"
• Large number of states: n·2^(n-1)
• Could turn these into a graph, but…
27. Full search tree
state = A, {}; cost = 0
– state = B, {A}; cost = 3
– state = D, {A}; cost = 3
– state = C, {A, B}; cost = 5
– state = F, {A, B}; cost = 12
– state = A, {B, C}; cost = 7
– state = E, {A, D}; cost = 7
– state = F, {A, D, E}; cost = 11
– state = B, {A, C}; cost = 10
– state = D, {A, B, C}; cost = 10
… (the tree continues)
What would happen if the goal were to visit every location twice?
28. Key concepts in search
• Set of states that we can be in
– Including an initial state…
– … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
– Each action results in a new state
– Typically defined by a successor function
• Given a state, produces all states that can be reached from it
• Cost function that determines the cost of each action (or path = sequence of actions)
• Solution: path from the initial state to a goal state
– Optimal solution: solution with minimal cost
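These concepts can be made concrete with the Water Jug Problem from the syllabus case-study list: with a 4-litre and a 3-litre jug, reach a state with exactly 2 litres in the 4-litre jug. A minimal sketch of the state space (function names and the particular goal test are our illustrative choices):

```python
# Water Jug Problem as a search problem: a state is a pair (x, y) with
# x litres in the 4-litre jug and y litres in the 3-litre jug.
INITIAL = (0, 0)

def is_goal(state):
    # Goal test: exactly 2 litres in the 4-litre jug.
    return state[0] == 2

def successors(state):
    x, y = state
    # Each action fills a jug, empties a jug, or pours one into the other.
    candidates = {
        (4, y), (x, 3),                       # fill a jug to the brim
        (0, y), (x, 0),                       # empty a jug
        (min(4, x + y), max(0, x + y - 4)),   # pour the 3L jug into the 4L jug
        (max(0, x + y - 3), min(3, x + y)),   # pour the 4L jug into the 3L jug
    }
    return candidates - {state}               # drop no-op actions
```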
31. Generic search algorithm
• Fringe = set of nodes generated but not expanded
• fringe := {initial state}
• loop:
– if fringe empty, declare failure
– choose and remove a node v from fringe
– check if v's state s is a goal state; if so, declare success
– if not, expand v, insert resulting nodes into fringe
• Key question in search: Which of the generated nodes do we expand next?
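The loop above can be sketched directly in Python. This is a tree-search skeleton under our own naming; `choose` removes and returns one fringe element, and that choice alone is what distinguishes BFS, DFS, and friends:

```python
def generic_search(initial, is_goal, successors, choose):
    """Generic tree search. `choose(fringe)` removes and returns the node
    to expand next; the expansion order it induces defines the strategy."""
    fringe = [(initial, [initial])]       # (state, path from the initial state)
    while fringe:
        state, path = choose(fringe)      # choose and remove a node
        if is_goal(state):
            return path                   # declare success
        for s in successors(state):       # expand, insert resulting nodes
            fringe.append((s, path + [s]))
    return None                           # fringe empty: declare failure
```

With `choose = lambda f: f.pop(0)` the fringe behaves as a FIFO queue (breadth-first); with `choose = lambda f: f.pop()` it behaves as a stack (depth-first).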
32. Uninformed search
• Given a state, we only know whether it is a goal state or not
• Cannot say one nongoal state looks better than another nongoal state
• Can only traverse the state space blindly in the hope of somehow hitting a goal state at some point
– Also called blind search
– Blind does not imply unsystematic!
34. Properties of breadth-first search
• Nodes are expanded in the same order in which they are generated
– Fringe can be maintained as a First-In-First-Out (FIFO) queue
• BFS is complete: if a solution exists, one will be found
• BFS finds a shallowest solution
– Not necessarily an optimal solution
• If every node has b successors (the branching factor) and the first solution is at depth d, then the fringe size will be at least b^d at some point
– This much space (and time) is required
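A FIFO-queue BFS matching these properties can be sketched as follows (names are ours; `collections.deque` gives O(1) removal from the front):

```python
from collections import deque

def bfs(initial, is_goal, successors):
    # Fringe as a FIFO queue: nodes are expanded in generation order,
    # so the first solution returned is a shallowest one.
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):
            return path
        for s in successors(state):
            fringe.append((s, path + [s]))
    return None  # no solution found (fringe exhausted)
```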
36. Implementing depth-first search
• Fringe can be maintained as a Last-In-First-Out (LIFO) queue (aka a stack)
• Also easy to implement recursively:
• DFS(node)
– If goal(node), return solution(node);
– For each successor of node
• Return DFS(successor) unless it is failure;
– Return failure;
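The recursive pseudocode can be fleshed out as a short Python sketch (function and parameter names are ours; like plain DFS, it does no cycle checking and so can loop forever on graphs with cycles):

```python
def dfs(state, is_goal, successors, path=None):
    # Recursive depth-first search; returns a path to a goal, or None (failure).
    path = (path or []) + [state]
    if is_goal(state):
        return path
    for s in successors(state):
        result = dfs(s, is_goal, successors, path)
        if result is not None:      # "unless it is failure"
            return result
    return None                     # all successors failed
```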
37. Properties of depth-first search
• Not complete (might cycle through nongoal states)
• If a solution is found, it is generally not optimal/shallowest
• If every node has b successors (the branching factor) and we search to at most depth m, the fringe holds at most b·m nodes
– Much better space requirement
– Actually, we generally don't even need to store all of the fringe
• Time: still need to look at every node
– b^m + b^(m-1) + … + 1 (for b > 1, this is O(b^m))
– Inevitable for uninformed search methods…
38. Combining good properties of BFS and DFS
• Limited-depth DFS: just like DFS, except never go deeper than some depth d
• Iterative deepening DFS:
– Call limited-depth DFS with depth 0;
– If unsuccessful, call with depth 1;
– If unsuccessful, call with depth 2;
– Etc.
• Complete, finds a shallowest solution
• Space requirements of DFS
• May seem wasteful timewise because of replicated effort
– Really not that wasteful because almost all effort is at the deepest level
– db + (d-1)b² + (d-2)b³ + … + 1·b^d is O(b^d) for b > 1
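The steps above can be sketched as two small functions (names are ours; the `max_depth` cap is only a safety stop for the sketch, since true iterative deepening keeps increasing the limit):

```python
def depth_limited_dfs(state, is_goal, successors, limit):
    # DFS that never goes more than `limit` edges below `state`.
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        result = depth_limited_dfs(s, is_goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(initial, is_goal, successors, max_depth=50):
    # Call depth-limited DFS with limits 0, 1, 2, ...: complete, finds a
    # shallowest solution, and uses only DFS-style memory.
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(initial, is_goal, successors, limit)
        if result is not None:
            return result
    return None
```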
39. Let's start thinking about cost
• BFS finds a shallowest solution because it always works on the shallowest nodes first
• Similar idea: always work on the lowest-cost node first (uniform-cost search)
• Will find an optimal solution (assuming costs increase by at least a constant amount along the path)
• Will often pursue lots of short steps first
• If the optimal cost is C, and cost increases by at least L each step, we may go to depth C/L
• Similar memory problems as BFS
– Iterative lengthening DFS does DFS up to increasing cost limits
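Uniform-cost search is the generic algorithm with a priority queue as the fringe. A minimal sketch (our naming; here `successors(state)` is assumed to yield `(next_state, step_cost)` pairs):

```python
import heapq

def uniform_cost_search(initial, is_goal, successors):
    # Always expand the lowest-cost fringe node first: the fringe is a
    # min-heap ordered by path cost.
    fringe = [(0, initial, [initial])]          # (cost so far, state, path)
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return cost, path                   # optimal under positive step costs
        for s, step in successors(state):
            heapq.heappush(fringe, (cost + step, s, path + [s]))
    return None
```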
40. Searching backwards from the goal
• Sometimes we can search backwards from the goal
– Maze puzzles
– Eights puzzle
– Reaching location F
– What about the goal of "having visited all locations"?
• Need to be able to compute predecessors instead of successors
• What's the point?
41. Predecessor branching factor can be smaller than successor branching factor
• Stacking blocks:
– the only action is to add something to the stack
[Figure: Start state: in hand A, B, C, stack empty. Goal state: stack A, B, C; in hand nothing]
• We'll see more of this…
42. Bidirectional search
• Even better: search from both the start and the goal, in parallel!
• If the shallowest solution has depth d and the branching factor is b on both sides, only O(b^(d/2)) nodes need to be explored!
image from cs-alb-pc3.massey.ac.nz/notes/59302/fig03.17.gif
43. Making bidirectional search work
• Need to be able to figure out whether the fringes intersect
– Need to keep at least one fringe in memory…
• Other than that, can do various kinds of search on either tree, and get the corresponding optimality etc. guarantees
• Not possible (feasible) if backwards search is not possible (feasible)
– Hard to compute predecessors
– High predecessor branching factor
– Too many goal states
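A minimal bidirectional BFS sketch under our own naming, assuming a single goal state and a caller-supplied `predecessors` function. It alternates single expansions rather than whole layers, so it illustrates the fringe-intersection idea without guaranteeing the meeting point is on a shallowest path:

```python
from collections import deque

def bidirectional_bfs(start, goal, successors, predecessors):
    # BFS forward from the start and backward from the goal;
    # return a path as soon as the two fringes intersect.
    if start == goal:
        return [start]
    fwd = {start: [start]}    # state -> path from start to that state
    bwd = {goal: [goal]}      # state -> path from that state to goal
    fwd_fringe = deque([start])
    bwd_fringe = deque([goal])
    while fwd_fringe and bwd_fringe:
        # Expand one node on the forward side.
        state = fwd_fringe.popleft()
        for s in successors(state):
            if s in bwd:                      # fringes intersect
                return fwd[state] + bwd[s]
            if s not in fwd:
                fwd[s] = fwd[state] + [s]
                fwd_fringe.append(s)
        # Expand one node on the backward side (this needs predecessors).
        state = bwd_fringe.popleft()
        for s in predecessors(state):
            if s in fwd:                      # fringes intersect
                return fwd[s] + bwd[state]
            if s not in bwd:
                bwd[s] = [s] + bwd[state]
                bwd_fringe.append(s)
    return None                               # no path exists
```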
44. Repeated states
• Repeated states can cause incompleteness or enormous runtimes
• Can maintain a list of previously visited states to avoid this
– If a new path to the same state has greater cost, don't pursue it further
– Leads to a time/space tradeoff
• "Algorithms that forget their history are doomed to repeat it" [Russell and Norvig]
[Figure: a small cyclic graph on A, B, C with edge costs 3, 2, 2; cycles lead to exponentially large search trees (try it!)]
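The fix described above can be sketched as a graph-search variant of BFS (our naming): keep a visited set and never re-insert a state already reached, which keeps the cyclic graph from unrolling into an exponentially large tree.

```python
from collections import deque

def bfs_graph_search(initial, is_goal, successors):
    # Remembering visited states bounds the work by the number of distinct
    # states, even when the underlying graph contains cycles.
    fringe = deque([(initial, [initial])])
    visited = {initial}
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):
            return path
        for s in successors(state):
            if s not in visited:    # don't pursue repeated states
                visited.add(s)
                fringe.append((s, path + [s]))
    return None
```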