2. Agent: Definition
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators
• Human agent:
– Sensors: eyes, ears, and other organs
– Actuators: hands, legs, and other body parts
• Robotic agent:
– Sensors: cameras, range finders, etc.
– Actuators: levers, motors, etc.
• Software agents (softbots)
3. Agent Architecture
(Figure: agent architecture. A real-world agent perceives through sensors and acts through effectors; its reasoning & decision-making component draws on a model of the world (being updated), a list of possible actions, prior knowledge about the world, and goals/utility.)
4. Intelligent Agents
• Fundamental abilities of intelligence
– Sensing
– Understanding and reasoning
– Acting
• In order to act intelligently, an agent must first sense; blind action is not intelligent action
• To sense wisely, an AI system has to understand
• Agent must be autonomous
• Agent must be rational
5. Sensors and Effectors
• An agent perceives its environment through sensors
– Percept: the complete set of inputs to an agent at a given time
– The current percept or a sequence of percepts determines the action
of an agent
• An agent can change its environment through effectors or actuators
– Operation involving an actuator is an action
6. Agents and Environments
• The agent function maps from percept histories to actions:
f: P* → A
• The agent program runs on the physical architecture to
produce f
• agent = architecture + program
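As an illustration, the mapping f: P* → A can be sketched for a toy two-location vacuum world (the locations, percepts, and actions here are illustrative assumptions, not part of the slides):

```python
# Minimal sketch of an agent function f: P* -> A for a toy
# vacuum world with two locations, "A" and "B".
def agent_function(percept_history):
    """Map the full percept sequence to an action."""
    location, status = percept_history[-1]  # act on the latest percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Example: the agent perceives that location A is dirty.
print(agent_function([("A", "Dirty")]))  # -> Suck
```

The function receives the whole percept history, even though this simple sketch only inspects the most recent percept.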
7. Agent Functions
• An agent is completely specified by the agent function that maps percept
sequences to actions
• Find a way to implement the agent function concisely
• An agent program implements the above mapping (i.e., from percept sequences to actions)
Agent’s Performance
• Behavior of Agent: In terms of agent function:
– Mapping: “Perception history into Action”
– Ideal Mapping: the sequence of actions the agent ought to take at any
point in time
• Performance of Agent: a subjective measure characterizing how successfully an agent is performing, in terms of
– Power consumption, accuracy, profit, etc.
8. Setting of Agent: PEAS
• Specification of the setting for intelligent agent design
has 4 coordinates: PEAS
–Performance measure
–Environment
–Actuators
–Sensors
11. PEAS: Example 3
• Part-picking robot:
– Performance measure: Percentage of parts in correct bins
– Environment: Conveyor belt with parts, bins
– Actuators: Jointed arm and hand
– Sensors: Camera, joint angle sensors
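The four PEAS coordinates of the part-picking robot above can be captured as a plain record; this is a minimal sketch, and the field names are my own:

```python
from dataclasses import dataclass

# Sketch: a PEAS specification as a simple record type.
@dataclass
class PEAS:
    performance_measure: str
    environment: str
    actuators: list
    sensors: list

part_picker = PEAS(
    performance_measure="Percentage of parts in correct bins",
    environment="Conveyor belt with parts; bins",
    actuators=["Jointed arm", "Hand"],
    sensors=["Camera", "Joint angle sensors"],
)
```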
12. PEAS: Example 4
• Interactive English tutor:
– Performance measure: Maximize student's score on test
– Environment: Set of students
– Actuators: Screen display (exercises, suggestions,
corrections)
– Sensors: Keyboard
14. Realistic Environments
• The simplest environment is
– Fully observable, deterministic, episodic, static, discrete
and single-agent.
• Most real situations are:
– Partially observable, stochastic, sequential, dynamic,
continuous and multi-agent.
15. Types of Agents
• Concepts
– Autonomous agent
– Rational agent
– Perfect rationality
– Bounded rationality
• Types of agent
– Table-driven agent
– Simple reflex agent
– Model-based reflex agent
– Goal-based agent (a problem-solving agent)
– Utility-based agent (can distinguish between different goals)
– Learning agent
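To make the reflex-agent idea concrete, here is a minimal sketch of a simple reflex agent driven by condition–action rules (the rules and percepts are illustrative assumptions; a model-based variant would additionally keep internal state):

```python
# Sketch: a simple reflex agent selects an action from
# condition-action rules using only the current percept.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Look up the action matching the current percept."""
    return RULES[percept]
```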
16. Autonomous Agent
• Autonomous: free, independent, sovereign, not subject to the
rule or control of another.
• An agent is autonomous if its behavior is determined by its
own experience (with ability to learn and adapt)
– An autonomous agent decides autonomously which action
to take in the current situation to maximize progress
toward a goal
17. Rational Agents
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
• For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the percept sequence and whatever built-in knowledge the agent has.
• Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
18. Perfect vs. Bounded Rationality
• Perfect Rationality: Assumes that the rational agent knows all
information and takes action that maximizes its utility.
– Humans do not satisfy this definition
• Bounded Rationality: Because of the limitations of the human
mind, humans use approximate methods to handle many tasks.
19. Table-driven Agents
A table is a simple way to specify the mapping f: P* → A
• Information comes from sensors: percepts
• Look it up in a table
• Triggers actions through effectors
• No notion of history: the action is determined by the current percept
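A minimal sketch of a table-driven agent in a toy vacuum world (the table entries are illustrative assumptions); indexing by the full percept sequence also shows why such tables grow huge:

```python
# Sketch: the table is indexed by the entire percept sequence seen
# so far, so the number of entries explodes with sequence length.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the percept sequence accumulated so far

def table_driven_agent(percept):
    """Append the percept and look up the whole sequence."""
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "NoOp")
```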
20. Drawbacks of Table-driven Agents
• Huge tables for mapping
– Chess would need a table with 35^100 entries
• It takes the designer a long time to build the table
• No autonomy – all actions are pre-determined
• Even with learning, need a long time to learn the table entries
• Types of tables
– Rule-based, neural networks, etc.
24. State-based Models (Search, Planning)
– Solutions are defined as a sequence of steps
– Model the task as a graph of states, and a solution as a path
– A state captures all the relevant information about the past
in order to act (optimally) in the future
Applications: Navigation, Games
– State-space graphs
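As a sketch of the state-based view, the graph below is an assumed toy state space, and breadth-first search returns a solution as a path of states:

```python
from collections import deque

# Sketch: a state-space graph as an adjacency dict (illustrative);
# a solution is a path from the start state to the goal state.
GRAPH = {
    "S": ["A", "B"],
    "A": ["G"],
    "B": ["A"],
    "G": [],
}

def bfs_path(start, goal):
    """Breadth-first search returning a shortest path of states."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists
```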
25. Logic-based Models (Logic)
– Implicit representation of classes of objects
– Deductive reasoning
Applications: Question answering systems, natural
language understanding
– Propositional logic, First-order logic
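A minimal sketch of deductive reasoning in propositional logic: forward chaining over Horn clauses (the knowledge base here is an illustrative assumption):

```python
# Sketch: Horn rules as (set of premises, conclusion) pairs;
# forward chaining applies rules until no new facts are derived.
HORN_RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
]

def forward_chain(facts):
    """Return the closure of the facts under the Horn rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in HORN_RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```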