2. What is AI?
Views of AI fall into four categories:
Thinking humanly | Thinking rationally
Acting humanly | Acting rationally
The textbook advocates "acting rationally."
3. Artificial Intelligence
• Definition:
The art of creating machines that perform
functions that require intelligence when
performed by people.
The study of how to make computers do
things at which, at the moment, people are better.
The study of the computations that make it
possible to perceive, reason, and act.
4. Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?" "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
• Anticipated all major arguments against AI in the following 50 years
• Suggested major components of AI: knowledge, reasoning, language
understanding, learning
5. Capabilities of a computer
• Natural language processing: to enable it to
communicate successfully in English.
• Knowledge representation: to share what it knows
or hears.
• Automated reasoning: to use the stored information
to answer questions and to draw new conclusions.
• Machine learning: to adapt to new circumstances
and to detect and extrapolate patterns.
6. Total Turing Test
The computer will need
* Computer vision: to perceive objects.
* Robotics: to manipulate objects and move.
7. Thinking humanly: cognitive modeling
• 1960s "cognitive revolution": information-processing
psychology
• Requires scientific theories of internal activities of
the brain
• How to validate? Requires either
1) predicting and testing the behavior of human subjects (top-down),
or 2) direct identification from neurological data (bottom-up)
• Both approaches (roughly, Cognitive Science and
Cognitive Neuroscience) are now distinct from AI.
8. • Cognitive science is the interdisciplinary scientific
study of how information concerning faculties such
as perception (sense), language, reasoning, and
emotion is represented and transformed in a (human
or other animal) nervous system or machine (e.g., a
computer).
9. Thinking rationally: "laws of thought"
E.g.:
Premises:
Socrates is a man.
All men are mortal.
Conclusion:
Socrates is mortal.
• These laws of thought were supposed to govern the
operation of the mind; their study initiated the field
called logic.
10. Acting rationally: rational agent
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize
goal achievement, given the available information.
• An agent is something that acts.
• Computer agents are expected to have other attributes:
- operating under autonomous control,
- perceiving their environment,
- adapting to change.
11. Rational agents
• An agent is an entity that perceives and acts.
• A rational agent is one that acts so as to achieve the
best outcome.
• Making correct inferences is part of being a rational
agent.
• To act rationally is to reason logically to the
conclusion that a given action will achieve one's goal.
12. Applications
• Autonomous planning & scheduling:
NASA's Remote Agent program controlled the
operation of a spacecraft.
• Game playing:
IBM's Deep Blue defeated the world champion
in a chess match.
• Autonomous control:
ALVINN, a computer vision system, steered a car.
13. • Diagnosis:
Medical diagnosis programs based on
probabilistic analysis perform at the level of
an expert physician.
• Logistics planning:
U.S. forces deployed a Dynamic Analysis and
Replanning Tool (DART) to do automated
logistics planning and scheduling for
transportation.
14. Logistics planning
• Logistics (military definition): the science of
planning and carrying out the movement and
maintenance of forces; those aspects of military
operations that deal with design and development,
storage, movement, and distribution; the movement,
evacuation, and hospitalization of personnel; and the
maintenance, operation, and disposition of facilities.
15. • Robotics:
Many surgeons now use robot assistants in
microsurgery; the system creates a 3D model of
a patient's internal anatomy.
• Language understanding and problem
solving:
PROVERB, a computer program that solves
crossword puzzles using word filters.
16. Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon
that environment through actuators
• Human agent: eyes, ears, and other organs for
sensors; hands, legs, mouth, and other body parts
for actuators.
• Robotic agent: cameras and infrared range finders for
sensors; various motors for actuators.
17. Agents and environments
• The agent function maps from percept histories to
actions:
f : P* → A
• The agent program runs on the physical architecture
to produce f
• agent = architecture + program
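These definitions can be sketched in Python. The table-driven agent below is illustrative (percept and action names are assumptions): the agent program receives one percept at a time, while the agent function f maps the whole percept history P* to an action.

```python
class TableDrivenAgent:
    """Agent program implementing the agent function f: P* -> A
    by looking up the full percept history in a table."""

    def __init__(self, table):
        self.table = table        # maps percept-history tuples to actions
        self.percepts = []        # the history P* accumulated so far

    def __call__(self, percept):
        self.percepts.append(percept)
        # the agent function maps the whole history to an action
        return self.table.get(tuple(self.percepts), "NoOp")

# hypothetical two-step table for a toy vacuum world
table = {("Dirty",): "Suck", ("Dirty", "Clean"): "Right"}
agent = TableDrivenAgent(table)
actions = [agent("Dirty"), agent("Clean")]
```

Here the "architecture" is simply the machine running the program; note that such a table grows exponentially with history length, which is why the later agent designs avoid explicit tables.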
20. Rational agents
• An agent should strive to "do the right thing", based
on what it can perceive and the actions it can
perform. The right action is the one that will cause
the agent to be most successful.
• Performance measure: an objective criterion for
success of an agent's behavior.
21. • E.g., the performance measure of a vacuum-
cleaner agent could be:
1. amount of dirt cleaned up,
2. amount of time taken,
3. amount of electricity consumed,
4. amount of noise generated, etc.
22. Rational agents
• Agents can perform actions in order to modify
future percepts so as to obtain useful
information (information gathering,
exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with ability
to learn and adapt)
23. PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• e.g., the task of designing an automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
24. PEAS
• Must first specify the setting for intelligent agent
design
• e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic.
– Actuators: Steering wheel, accelerator, brake, signal, horn.
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard.
26. Environment types
• Fully observable (vs. partially observable): the
agent's sensors give it access to the complete
state of the environment (the sensors detect all
aspects relevant to the choice of action).
An environment may be partially observable because of
noisy or inaccurate sensors (e.g., the vacuum agent).
27. • Deterministic (vs. stochastic): the next state of
the environment is completely determined by
the current state and the agent's action (e.g., the
vacuum world).
If the environment is partially observable, it may
appear stochastic.
(If the environment is deterministic except for
the actions of other agents, then the
environment is strategic.)
28. • Episodic (vs. sequential):
The agent's experience is divided into
atomic "episodes" (each episode consists of
the agent perceiving and then performing a
single action), and the next episode does not
depend on the actions taken in previous episodes.
In a sequential environment, the current decision
can affect all future decisions.
E.g., chess and taxi driving are sequential.
29. • Static (vs. dynamic): the environment is
unchanged while the agent is deliberating
(e.g., a puzzle).
Dynamic: the environment continuously asks the
agent what it wants to do next (e.g., taxi driving).
(The environment is semidynamic if the
environment itself does not change with the
passage of time but the agent's performance
score does, e.g., chess played with a clock.)
30. • Discrete (vs. continuous): a limited number of
distinct, clearly defined percepts and
actions (e.g., chess).
Continuous: taxi driving (the speed and location of
the taxi are continuous values).
• Single agent (vs. multiagent): an agent
operating by itself in an environment.
32. Agent functions and programs
• Agent = architecture + program
• Agent program: takes the current percept
as input.
• Agent function: takes the entire percept
history.
33. Agent types
Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
34. Simple reflex agents
• Select actions on the basis of the current
percept, ignoring the rest of the percept
history.
• E.g., the vacuum agent:
its decision is based only on the current
location and on whether that location contains dirt.
35. Simple Reflex agent
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
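A direct Python transcription of this pseudocode (square names A and B as on the slide):

```python
def reflex_vacuum_agent(location, status):
    """Choose an action from the current percept alone,
    ignoring all percept history."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

action = reflex_vacuum_agent("A", "Dirty")
```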
37. Simple reflex agents
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
38. Model-based reflex agents
• Under partial observability, the agent maintains some
internal state that depends on the percept history and
thereby reflects unobserved aspects of the current state.
Updating the state requires two kinds of knowledge:
1) how the world evolves independently of the agent;
2) how the agent's own actions affect the world.
This knowledge of "how the world works" is called a
model of the world.
40. Model-based reflex agents
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
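A sketch of this scheme in Python for the two-square vacuum world. The internal model (remembering the last observed status of each square) and the NoOp rule are illustrative assumptions:

```python
class ModelBasedVacuumAgent:
    """Keeps internal state so it can act sensibly under partial observability."""

    def __init__(self):
        # model of the world: last known status of each square (None = unknown)
        self.state = {"A": None, "B": None}

    def __call__(self, percept):
        location, status = percept
        self.state[location] = status              # UPDATE-STATE
        if status == "Dirty":                      # rule: dirty square -> clean it
            return "Suck"
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"                          # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
```

Unlike the simple reflex agent, this one can stop (NoOp) once its model says both squares have been seen clean.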
41. Goal-based agents
• Knowing the current state is not always enough to
decide what to do; the agent also needs goal information.
E.g., taxi driving: the passenger's destination.
• Goal-based action selection can be straightforward,
but may require asking "What will happen if I do
such and such?"
43. Utility-based agents
• Goals alone are not enough to generate high-quality
behavior.
• Goals just provide a crude distinction between
"happy" and "unhappy" states.
Two kinds of cases where goals are inadequate:
• Conflicting goals: only some of them can be
achieved.
• Several goals, none of which can be achieved with
certainty: the importance of each goal must be weighed.
45. Learning agents
• Learning allows the agent to operate in initially
unknown environments.
4 components:
1) Learning element.
2) Performance element.
3) Critic.
4) Problem generator.
46. • Learning element:
responsible for making improvements.
• Performance element:
responsible for selecting external actions
(it is the "entire agent": it takes in percepts
and decides on actions).
• Critic:
the learning element uses feedback from the critic
on how the agent is doing, and determines how the
performance element should be modified to
do better in the future.
49. E.g., taxi driving:
• Driving actions: performance element.
• Observing the world and giving feedback to the
learning element: critic.
• Formulating a rule (e.g., "that was a bad action"):
learning element.
• Installing the new rule modifies the
performance element.
• Identifying areas of behavior in need of improvement
and suggesting experiments: problem generator.
52. Formulating a problem
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
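This formulation can be run with a breadth-first search over a hand-coded fragment of the Romania road map (city adjacencies assumed from the example; distances omitted):

```python
from collections import deque

# partial Romania road map (adjacency only; an assumption for this sketch)
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad"],
    "Bucharest": [],
}

def breadth_first_search(start, goal):
    """Expand shallowest paths first using a FIFO queue of paths."""
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = frontier.popleft()
        city = path[-1]
        if city == goal:
            return path
        if city not in explored:
            explored.add(city)
            for nxt in roads.get(city, []):
                frontier.append(path + [nxt])
    return None
```

BFS minimizes the number of driving legs, which here happens to reproduce the slide's solution Arad, Sibiu, Fagaras, Bucharest.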
56. (The 8-puzzle)
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
57. Real world problem
• Touring problem.
• Traveling sales man problem.
• VLSI. (component connections on chip, minimize-area,
circuit delay, capacitance, maximize-manufacturing yield)
• Automatic assemble problem. (assemble the parts
of some objects, protein design-to find sequence of amino
acid)
• Internet searching. (looking for answer to questions)
63. Search strategies
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
Fringe:
The collection of nodes that have been generated but not
yet expanded. Each element of the fringe is a leaf node, a
node with no successors in the tree.
64. Uninformed search strategies
• Uninformed search strategies use only the
information available in the problem
definition (also called blind search).
• Breadth-first search.
• Uniform-cost search.
• Depth-first search.
• Depth-limited search.
• Iterative deepening search.
• Bidirectional search.
65. Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at
end
67. • Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
b – branching factor
d – depth of the shallowest goal
m – maximum length of any path
O(·) – asymptotic order of growth (big-O notation)
68. Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost.
• Expands the node n with the lowest path cost;
if all step costs are equal, this is identical to
breadth-first search.
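A sketch of uniform-cost search using a priority queue (Python's `heapq`); the weighted graph is an invented toy example:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the least-cost unexpanded node first (priority queue by path cost)."""
    frontier = [(0, start, [start])]      # (path cost, node, path)
    best = {}                             # cheapest known cost to each node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                      # already expanded more cheaply
        best[node] = cost
        for nxt, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# toy weighted graph: cheapest A -> D route is A-B-C-D with cost 4
graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 1), ("D", 6)],
    "C": [("D", 2)],
}
```

Note that the direct-looking edges A-C (cost 5) and B-D (cost 6) are bypassed by the cheaper three-step route, which breadth-first search would miss.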
69. Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
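A sketch with an explicit LIFO stack (the toy graph is an invented example); successors are pushed in reverse so the leftmost child is expanded first:

```python
def depth_first_search(graph, start, goal):
    """Expand the deepest unexpanded node first (LIFO stack of paths)."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()               # most recently added = deepest path
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for nxt in reversed(graph.get(node, [])):
                stack.append(path + [nxt])
    return None

toy = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
```

The `visited` set is the modification mentioned below that avoids repeated states, making the search complete in finite spaces.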
73. Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and spaces
with loops
– modify to avoid repeated states along the path:
complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than
breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
81. Bidirectional Search
• Two simultaneous searches, one forward from the
initial state and the other backward from the goal,
stopping when the two searches meet in the
middle.
83. Admissible heuristics
E.g., for the 8-puzzle:
h(n) = estimated cost from n to goal
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location)
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
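Both heuristics can be computed directly. The sample state below is the standard textbook configuration that yields the values on the slide, under the assumption that the goal layout is the blank followed by tiles 1 to 8, read row by row:

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # 0 marks the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)                # goal position of tile t
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

S = (7, 2, 4, 5, 0, 6, 8, 3, 1)          # sample start state: h1 = 8, h2 = 18
```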
84. Relaxed problems
• A problem with fewer restrictions on the actions is
called a relaxed problem
• The cost of an optimal solution to a relaxed problem
is an admissible heuristic for the original problem
• If the rules of the 8-puzzle are relaxed so that a tile
can move anywhere, then h1(n) gives the shortest
solution
• If the rules are relaxed so that a tile can move to any
adjacent square, then h2(n) gives the shortest
solution
85. Local search algorithms
• In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution.
• In such cases, we can use local search algorithms:
they keep a single "current" state rather than multiple
paths.
• To understand local search, consider the state-space
landscape, which has both "location" and "elevation".
86. • Global minimum: the aim if elevation corresponds to
cost.
• Global maximum: the aim if elevation corresponds to
an objective function (the highest peak).
87. Example: n-queens
• Put n queens on an n × n board with no two
queens on the same row, column, or diagonal
90. Hill-climbing search: 8-queens problem
• h = number of pairs of queens that are attacking each other, either directly or
indirectly
• h = 17 for the state shown in the figure
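The heuristic h can be computed by counting attacking pairs. Boards are represented here as a tuple where entry c gives the row of the queen in column c (a common one-queen-per-column encoding, assumed for this sketch):

```python
from itertools import combinations

def attacking_pairs(board):
    """h = number of queen pairs attacking each other (directly or indirectly)."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1                       # same row or same diagonal
    return h
```

Because every pair on a shared row or diagonal is counted, queens that attack "indirectly" (through an intervening queen) are included, matching the slide's definition. A solved board scores 0.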
92. Simulated annealing search
• Idea: escape local maxima by allowing some "bad"
moves, but gradually decrease their frequency.
(Analogy: shaking a ping-pong ball out of a shallow
dip on a bumpy surface: shake hard at first, then
gradually less.)
Properties of simulated annealing search:
• One can prove: if T decreases slowly enough, then
simulated annealing search will find a global
optimum with probability approaching 1.
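A minimal sketch of the idea on a toy minimization problem; the geometric cooling schedule, neighbor function, and parameters are all illustrative assumptions, not the textbook's algorithm verbatim:

```python
import math
import random

def simulated_annealing(f, start, neighbor, t0=10.0, cooling=0.95, steps=2000):
    """Minimize f: accept all improving moves, and worsening moves
    with probability exp(-delta / T), where T decreases over time."""
    random.seed(0)                       # fixed seed so the demo is deterministic
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = f(nxt) - f(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = nxt                # accept good moves, and some bad ones
        t = max(t * cooling, 1e-9)       # gradually lower the temperature
    return current

# toy problem: minimize (x - 3)^2 over the integers, starting far away
best = simulated_annealing(lambda x: (x - 3) ** 2, start=50,
                           neighbor=lambda x: x + random.choice([-1, 1]))
```

Early on (high T) the walk wanders freely; as T shrinks, worsening moves become vanishingly likely and the state settles in the global minimum.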
93. Local beam search
• Keep track of k states rather than just one.
• Start with k randomly generated states.
• At each iteration, all the successors of all k states are
generated.
• If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
94. Genetic algorithms
• A successor state is generated by combining two parent states
• Start with k randomly generated states (population)
• A state is represented as a string over a finite alphabet (often
a string of 0s and 1s)
• Evaluation function (fitness function): higher values for
better states.
• Produce the next generation of states by selection, crossover,
and mutation
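These steps can be sketched on the "one-max" toy problem (maximize the number of 1s in an 8-bit string); the population size, mutation rate, and keep-the-fitter-half selection scheme are illustrative choices:

```python
import random

def genetic_algorithm(pop_size=20, length=8, generations=100, p_mut=0.1):
    """Evolve bit strings toward all ones; fitness = number of 1s."""
    random.seed(1)                               # deterministic demo
    # start with k randomly generated states (the population)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum                                # number of 1 bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: fitter half survives
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, length - 1)  # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):              # mutation: rarely flip a bit
                if random.random() < p_mut:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm()
```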