A General Introduction to Artificial
Intelligence
Contents
What is Artificial Intelligence (AI)?
• AI-related areas.
• Brief history of AI.
• Applications of AI.
• Core issues in AI.
• What will be in the course?
What is AI?
• Definitions of AI have been somewhat controversial
(because of the "A" and because of the "I").
• Two main schools of thought on what AI is: Strong AI and
Weak AI.
(see Artificial Minds, MIT Press, 1995, by Stan Franklin)
Strong AI
Strong AI implication: intelligent agents can become
sapient (self-aware, like human beings).
(Early AI researchers and ... Hollywood!!!)
Weak AI
Weak AI implication: intelligent agents can only simulate
some human behaviors.
(Widely accepted now)
Views on AI
Views on AI fall into four categories:
Thinking Humanly      Thinking Rationally
Acting Humanly        Acting Rationally
The view of this course: acting rationally.
Thinking Humanly
• The subject: studying human intelligence.
• 1960s "cognitive revolution": information-processing
psychology.
Requires scientific theories of the internal activities of the brain.
-- How to validate? Requires either
1) predicting and testing the behavior of human subjects (top-down),
or 2) direct identification from neurological data (bottom-up).
Both approaches (roughly, Cognitive Science and Cognitive
Neuroscience) are now distinct from AI.
Thinking Humanly
Two main approaches:
Top-down: Cognitive Science → Symbolism.
Bottom-up: Neural and Brain Science → Connectionism.
Acting Humanly
Turing Test (1950):
Predicted that by 2000, a machine might have a 30% chance of
fooling a lay person for 5 minutes.
Anticipated all major arguments against AI in the following 50 years.
Suggested major components of AI: knowledge, reasoning,
language understanding, learning.
Thinking rationally:
"laws of thought"
What are the rules (laws) of thought?
Aristotle → George Boole → David Hilbert = Logic
Thinking rationally
"laws of thought"
• Aristotle: what are correct arguments/thought processes?
• Several Greek schools developed various forms of logic:
notation and rules of derivation for thoughts; may or may not
have proceeded to the idea of mechanization.
• Direct line through mathematics and philosophy to modern AI
(logic-based agents).
• Problems:
1. Not all intelligent behavior is mediated by logic.
2. What is the purpose of thinking? What thoughts should I have?
Acting rationally:
rational agents
• Rational behavior: "doing the right thing".
• The right thing: that which is expected to
maximize goal achievement, given the
available information.
• Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in the
service of rational action.
Rational Agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept
histories to actions:
[f: P* → A]
• For any given class of environments and tasks, we
seek the agent (or class of agents) with the best
performance
• Caveat: computational limitations make perfect
rationality unachievable
→ design the best program for the given machine resources.
Rational Agents
Advantages of this view:
- Intelligence does not necessarily require thinking
and/or reasoning.
- Intelligence is not necessarily attached to humans or living
creatures:
Intelligence can be in a process.
Intelligence can be achieved by the cooperation of a swarm of agents.
Rational Agents
Examples:
Evolutionary Intelligence, Swarm Intelligence.
Some Definitions of AI
from AI Books
• "The exciting new effort to make computers think …
machines with minds, in the full and literal sense"
(Haugeland, 1985).
• "[The automation of] activities that we associate with human
thinking, activities such as decision-making, problem solving,
learning" (Bellman, 1978).
Some Definitions of AI
from AI Books
• "The art of creating machines that perform functions
that require intelligence when performed by people"
(Kurzweil, 1990).
• "The study of how to make computers do things at which,
at the moment, people are better" (Rich and Knight, 1991).
Some Definitions of AI
from AI Books
• "The study of mental faculties through the use of
computational models" (Charniak and McDermott,
1985).
• "The study of the computations that make it possible
to perceive, reason, and act" (Winston, 1992).
Some Definitions of AI
from AI Books
• "Computational Intelligence is the study of the design
of intelligent agents" (Poole et al., 1998).
• "AI … is concerned with intelligent behavior in
artifacts" (Nilsson, 1998).
AI-Related Areas
• Philosophy.
• Cognitive science.
• Neuroscience and Brain Theory.
• Cybernetics and control theory.
• Mathematical Logic.
• Evolutionary Biology.
• Social Intelligence.
• Swarm Behavior.
• Organization Theory.
• Statistics.
• .......
AI History
Three stages:
Symbolism (1970s-80s): Automated Reasoning and Theorem Proving,
Expert Systems, Logic Programming, ...
Connectionism (1980s-90s): Neural Networks, Statistical Learning,
Support Vector Machines, Probabilistic Graphical Models, ...
Evolutionary Computation (1990s-?): Evolutionary Programming,
Evolution Strategies, Genetic Algorithms, Intelligent
Multi-Agent Systems.
Abridged History of AI
1943 McCulloch & Pitts: a Boolean model of the brain.
1950 Turing's "Computing Machinery and Intelligence"
1956 Dartmouth meeting: the term "Artificial Intelligence"
was coined (McCarthy).
1957 Rosenblatt - PERCEPTRON (Widrow & Hoff - ADALINE, 1960).
1950s Samuel's checker program,
Newell & Simon's Logic Theorist,
Gelernter's Geometry Engine.
1964 Evolutionary Strategies (Rechenberg et al.).
1964 Evolutionary Programming (L. Fogel).
1965 Robinson's complete algorithm for logical reasoning.
Abridged History of AI
1969 Minsky and Papert - "Perceptrons".
1969-79 Knowledge-based systems (Expert and Planning
Systems) - the era of Symbolism.
1980-85 AI became an industry.
1986 Rumelhart, Hinton, Williams - the Backpropagation
learning algorithm for multi-layer perceptrons - the
rebirth of neural networks.
1987 AI became a science.
1986-1995 Neural Networks, Machine Learning,
Approximate Reasoning, Fuzzy Systems, ... -
the era of Connectionism.
1995- Evolutionary Computation, Natural Computation,
Intelligent Multi-Agent Systems.
Areas/Applications in AI
• Natural Language Processing.
• Automated Reasoning.
• Knowledge-Based Systems.
• Pattern Recognition.
• Computer Vision.
• Speech Processing.
• Data Mining and Knowledge Discovery.
• Intelligent Planning.
• Intelligent Computer Games.
• Multi-agent Systems.
• Evolutionary and Natural Computation.
• Artificial Life.
• ........
State of The Art
• Deep Blue defeated the reigning world chess champion, Garry
Kasparov, in 1997.
• MYCIN medical expert system (1970s, Stanford).
• A theorem prover settled the Robbins conjecture, which had been
unsolved for decades (McCune's EQP, 1996).
• During the 1991 Gulf War, US forces deployed an AI logistics
planning and scheduling program that involved up to 50,000
vehicles, cargo items, and people.
• Gulf War 2 (2003): "artificial war".
• NASA's on-board autonomous planning program controlled
the scheduling of operations for a spacecraft.
• A new generation of washing machines using neuro-fuzzy
technology.
• Human identification through iris detection and analysis at
Heathrow Airport, using evolutionary computation techniques.
• ........
Core Issues in AI
• Representation.
• Reasoning.
• Learning.
• Interaction.
Intelligent Agents
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon
that environment through actuators.
• Human agent: eyes, ears, and other organs for sensors;
hands, legs, mouth, and other body parts for actuators.
• Robotic agent: cameras and infrared range finders for
sensors; various motors for actuators.
Agents and Environments
• The agent function maps from percept histories to
actions:
[f: P* → A]
• The agent program runs on the physical architecture
to produce f
• agent = architecture + program
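The distinction between the agent function and the agent program can be sketched in a few lines of Python (names and the toy rule are illustrative assumptions, not part of the slides):

```python
# A minimal sketch of f: P* -> A. The agent *function* maps a whole
# percept history to an action; the agent *program* sees one percept
# at a time and keeps the history itself (it runs on the architecture).
def agent_function(percept_history):
    location, status = percept_history[-1]   # toy rule: react to last percept
    return "Suck" if status == "Dirty" else "Right"

def make_agent_program():
    history = []                             # state kept by the program
    def program(percept):
        history.append(percept)
        return agent_function(tuple(history))
    return program
```

Here the program realizes the function incrementally: each call appends one percept and re-applies f to the accumulated history.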
Vacuum-Cleaner World
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
A Vacuum-Cleaner Agent
Percept sequence                          Action
[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
….                                        ….
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck
….                                        ….
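The rows above can be transcribed directly as a lookup keyed on percept sequences (a sketch: only the rows shown are filled in, whereas the full table is unbounded):

```python
# Partial percept-sequence -> action table for the vacuum world,
# transcribing the rows shown above. Keys are tuples of percepts.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def lookup(percepts):
    """Return the tabulated action for a percept sequence (None if absent)."""
    return table.get(tuple(percepts))
```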
Rational Agents
• An agent should strive to "do the right thing", based on
what it can perceive and the actions it can perform. The
right action is the one that will cause the agent to be
most successful.
• Performance measure: An objective criterion for
success of an agent's behavior.
• E.g., performance measure of a vacuum-cleaner agent
could be amount of dirt cleaned up, amount of time
taken, amount of electricity consumed, amount of
noise generated, etc.
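As a sketch, such a measure could combine those quantities into a single score; the weights below are arbitrary assumptions for illustration, not taken from the slides:

```python
# Hypothetical performance measure for a vacuum-cleaner agent:
# reward dirt cleaned, penalize time taken and electricity used.
def performance(dirt_cleaned, time_taken, energy_used):
    return 10 * dirt_cleaned - 1 * time_taken - 2 * energy_used
```

Different weightings encode different notions of "success", which is why the performance measure must be fixed before comparing agent designs.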
Rational Agents
• Rational Agent: For each possible percept sequence,
a rational agent should select an action that is
expected to maximize its performance measure,
given the evidence provided by the percept
sequence and whatever built-in knowledge the agent
has.
Rational Agents
• Rationality is distinct from omniscience (all-
knowing with infinite knowledge)
• Agents can perform actions in order to modify
future percepts so as to obtain useful
information (information gathering, exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with ability
to learn and adapt)
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design.
• Consider, e.g., the task of designing an automated
taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• Must first specify the setting for intelligent agent
design:
• Consider, e.g., the task of designing an automated taxi
driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits.
– Environment: Roads, other traffic, pedestrians, customers.
– Actuators: Steering wheel, accelerator, brake, signal, horn.
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard.
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient, minimize
costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
• Sensors: Keyboard (entry of symptoms, findings,
patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of parts in correct
bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's score on
test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions,
corrections)
• Sensors: Keyboard
Environment Types
• Fully observable (vs. partially observable): An agent's sensors
give it access to the complete state of the environment at each
point in time.
• Deterministic (vs. stochastic): The next state of the environment
is completely determined by the current state and the action
executed by the agent. (If the environment is deterministic
except for the actions of other agents, then the environment is
strategic).
• Episodic (vs. sequential): The agent's experience is divided into
atomic "episodes" (each episode consists of the agent perceiving
and then performing a single action), and the choice of action in
each episode depends only on the episode itself.
Environment Types
• Static (vs. dynamic): The environment is unchanged
while an agent is deliberating. (The environment is
semidynamic if the environment itself does not
change with the passage of time but the agent's
performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent operating by
itself in an environment.
Environment Types
                  Chess with   Chess without   Taxi
                  a clock      a clock         driving
Fully observable  Yes          Yes             No
Deterministic     Strategic    Strategic       No
Episodic          No           No              No
Static            Semi         Yes             No
Discrete          Yes          Yes             No
Single agent      No           No              No
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
Agent Functions and Programs
• An agent is completely specified by the agent function
mapping percept sequences to actions
• One agent function (or a small equivalence class) is
rational
• Aim: find a way to implement the rational agent
function concisely
Table-lookup Agent
• Drawbacks:
– Huge table
– Take a long time to build the table
– No autonomy
– Even with learning → a long time to learn the table entries
Function TABLE-DRIVEN-AGENT(percept) returns an action
  Static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences,
          initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
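The pseudocode translates nearly line for line into Python (a sketch: the demo table here is deliberately tiny, whereas a "fully specified" table would be astronomically large):

```python
def make_table_driven_agent(table):
    """TABLE-DRIVEN-AGENT: the closure holds the 'static' percept sequence."""
    percepts = []                            # a sequence, initially empty
    def agent(percept):
        percepts.append(percept)             # append percept to percepts
        return table.get(tuple(percepts))    # action <- LOOKUP(percepts, table)
    return agent

# Tiny demo table (illustrative entries only).
demo_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(demo_table)
```

The huge-table drawback is visible immediately: every distinct percept history needs its own entry.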
Agent Program for
A Vacuum-Cleaner Agent
Function REFLEX-VACUUM-AGENT(location, status) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
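In Python the same agent is a single function, a direct transcription of the pseudocode above:

```python
def reflex_vacuum_agent(location, status):
    """REFLEX-VACUUM-AGENT: decide from the current percept only."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```

Note how much smaller this is than the table-driven version: the condition-action rules compress the table.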
Agent Types
Four basic types in order of increasing generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple Reflex Agents
Simple Reflex Agents
Function SIMPLE-REFLEX-AGENT(percept) returns an action
  Static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
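A minimal Python sketch of this scheme, with hypothetical vacuum-world rules standing in for a real rule base (rules are (condition, action) pairs; RULE-MATCH is a first-match scan):

```python
def make_simple_reflex_agent(rules, interpret_input):
    """SIMPLE-REFLEX-AGENT: match condition-action rules against the
    interpreted current percept; no history is kept."""
    def agent(percept):
        state = interpret_input(percept)     # state <- INTERPRET-INPUT(percept)
        for condition, action in rules:      # rule <- RULE-MATCH(state, rules)
            if condition(state):
                return action                # action <- RULE-ACTION[rule]
    return agent

# Hypothetical vacuum-world rules: each rule is (condition, action).
vacuum_rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_simple_reflex_agent(
    vacuum_rules, lambda p: {"location": p[0], "status": p[1]})
```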
Model-Based Reflex Agents
Model-based Reflex Agents
Function REFLEX-AGENT-WITH-STATE(percept) returns an action
  Static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
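The same structure in Python, with a hypothetical UPDATE-STATE that remembers the last observed status of each square (the model and rules are illustrative assumptions):

```python
def make_model_based_agent(rules, update_state):
    """REFLEX-AGENT-WITH-STATE: like the simple reflex agent, but an
    internal model ('state') persists and is updated on every percept."""
    state, action = {}, None                 # static: world model + last action
    def agent(percept):
        nonlocal state, action
        state = update_state(state, action, percept)
        for condition, act in rules:         # rule <- RULE-MATCH(state, rules)
            if condition(state):
                action = act                 # remember the most recent action
                return action
    return agent

# Hypothetical vacuum-world model: track the last status of each square.
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state, location=location, status=status)
    new_state[location] = status
    return new_state

vacuum_rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_model_based_agent(vacuum_rules, update_state)
```

Unlike the simple reflex agent, this one could consult its model (e.g., whether square B was clean last time it looked) when choosing an action.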
Goal-based Agents
Utility-based Agents
Learning Agents