Intelligent Agents
AI, Subash Chandra Pakhrin
Instructional Objective
• Define an agent
• Define an intelligent agent
• Define a rational agent
• Explain bounded rationality
• Discuss different types of environments
• Explain different agent architectures
Instructional Objective
On completion of this lesson the student will be able to
• Understand what an agent is and how an agent
interacts with the environment.
• Given a problem situation, the student should be able
to
– Identify the percepts available to the agent and
– the actions that the agent can execute.
• Understand the performance measures used to
evaluate an agent
• Understand the definition of a rational agent
• Understand the concept of bounded rationality
Instructional Objective
• On completion of this lesson the student will
– Be familiar with
• Different agent architectures
• Stimulus response agents
• State based agents
• Deliberative / goal-directed agents
• Utility based agents
• Learning agents
– Be able to analyze a problem situation and be able to
• Identify the characteristics of the environment
• Recommend the architecture of the desired agent
Agent and Environment
Percepts: hearing, seeing, and other sensory inputs
Actions: performed through actuators / effectors
Agents
• Operate in an environment
• Perceives its environment through sensors
• Acts upon its environment through
actuators/effectors
• Have goals
Sensors and effectors
• An agent perceives its environment through
sensors
– The complete set of inputs at a given time is called a
percept
– The current percept, or a sequence of percepts, can
influence the actions of an agent
• It can change the environment through effectors
– An operation involving an actuator is called an action
– Actions can be grouped into action sequences
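The percept/action loop described above can be sketched in code. This is a minimal, illustrative Python sketch (not from the slides); the class and method names are invented:

```python
# Minimal sketch of the agent abstraction: each step the agent
# receives a percept, appends it to the percept sequence to date,
# and maps that sequence to an action via its agent program.
class Agent:
    def __init__(self):
        self.percepts = []  # the percept sequence to date

    def program(self, percepts):
        """Map a percept sequence to an action; subclasses override this."""
        raise NotImplementedError

    def step(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)

class EchoAgent(Agent):
    # Trivial agent program that reacts to the latest percept only.
    def program(self, percepts):
        return "ack:" + percepts[-1]
```

Concrete agents (table-based, reflex, state-based, utility-based) differ only in how `program` is implemented.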
Agents
• Have sensors, actuators
• Have goals
• Implement mapping from
percept sequence to
actions
• Performance measure to
evaluate agents
• An autonomous agent decides
autonomously which action
to take in the current
situation to maximize
progress towards its goals.
Performance
• Behavior and performance of intelligent agents are
described in terms of the agent function
– A mapping from the perception history (percept
sequence) to actions
– Ideal mapping: specifies which action an agent should
take at any point in time
• Performance measure: a subjective measure
to characterize how successful an agent is
(e.g., speed, power usage, accuracy, money,
etc)
Examples of Agent
• Humans
– Eyes, ears, skin, taste buds, etc. for sensors
• Robots
– Camera, infrared, bumper, etc. for sensors
– Grippers, wheels, lights, speakers, etc. for
actuators
• Software agent (soft bots)
– Functions as sensors
– Functions as actuators
Types of Agents: Robots
http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/overview.html
Cog (MIT)
Types of Agents: Robots
https://www.youtube.com/watch?v=8t8fyiiQVZ0
The AIBO Entertainment
Robot is a totally new
kind of robot:
autonomous, sensitive to
his environment, and able
to learn and mature like a
living creature. Since each
AIBO experiences his
world differently, each
develops his own unique
personality – different
from any other AIBO in
the world!
Aibo (SONY)
Types of Agents
• Soft bots
– Askjeeves.com
• Expert System
– Cardiologist
• Autonomous spacecraft
• Intelligent buildings
Agents
• Fundamental faculties of intelligence
– Acting
– Sensing
– Understanding, reasoning, learning
• In order to act you must sense; blind action is
not a characteristic of intelligence.
• Robotics: sensing and acting; understanding is
not strictly necessary.
• Sensing needs understanding to be useful.
Intelligent Agents
• Intelligent Agents
– Must sense
– Must act
– Must be autonomous (to some extent),
– Must be rational.
Rational Agent
• AI is about building rational agents
• An agent is something that perceives and acts.
• A rational agent always does the right thing.
– What are the functionalities (goals)?
– What are the components?
– How do we build them?
Rationality
• Perfect Rationality
– Assumes that the rational agent knows all and will
take the action that maximizes his/her utility.
– Equivalent to demanding that the agent is Omniscient.
– Human beings do not satisfy this definition of
rationality.
• Bounded Rationality: Herbert Simon, 1972 (CMU)
– Because of the limitations of the human mind, humans
must use approximate methods to handle many tasks.
Rationality
• Rational Action: The action that maximizes the
expected value of the performance measure
given the percept sequence to date
– Rational = Best?
• Yes, to the best of its knowledge
– Rational = Optimal?
• Yes, to the best of its abilities
• And its constraints
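The definition above ("maximize the expected value of the performance measure") can be written directly as an argmax. A hedged sketch; `outcomes` and `performance` are assumed helpers supplied by the designer, not part of the slides:

```python
# Rational action: among the available actions, choose the one that
# maximizes the expected value of the performance measure, given what
# the agent knows. outcomes(a) yields (probability, resulting_state)
# pairs; performance(s) scores a state.
def rational_action(actions, outcomes, performance):
    def expected_value(action):
        return sum(p * performance(s) for p, s in outcomes(action))
    return max(actions, key=expected_value)
```

Note that the agent maximizes *expected* performance over what it can predict, which is exactly why rationality does not require omniscience.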
Omniscience
• A rational agent is not omniscient
– It doesn’t know the actual outcome of its actions
– It may not know certain aspects of its
environment.
• Rationality must take into account the
limitations of the agent
– Percept sequence, background knowledge,
feasible actions
– Deal with the expected outcome of actions
Bounded Rationality
• Evolution did not give rise to optimal agents,
but to agents which are in some senses locally
optimal at best.
• In 1957, Simon proposed the notion of
Bounded Rationality:
the property of an agent that behaves in a
manner that is as nearly optimal with respect
to its goals as its resources allow.
Agent Environment
• Environments in which agents operate can be
defined in different ways.
It is helpful to view the following definitions as
referring to the way the environment appears
from the point of view of the agent itself.
Environment: Observability
• Fully observable
– All of the environment relevant to the action being
considered is observable
– Such environments are convenient, since the agent is
freed from the task of keeping track of the change in
the environment.
• Partially observable
– The relevant features of the environment are only
partially observable
• Example:
– Fully obs: Chess; Partially obs: Poker
Environment: Determinism
• Deterministic: The next state of the environment
is completely determined by the current state and
the agent's action (e.g., image analysis).
• Stochastic: If an element of randomness or
uncertainty is present, the environment is
stochastic (e.g., Ludo). Note that a deterministic yet
partially observable environment will appear
stochastic to the agent.
• Strategic: An environment whose state is wholly
determined by the preceding state and the actions
of multiple agents is called strategic (e.g., chess).
Environment: Episodicity
• Episodic / Sequential
– An episodic environment means that subsequent
episodes do not depend on what actions occurred
in previous episodes.
– In a sequential environment, the agent engages in
a series of connected episodes.
Environment: Dynamism
• Static environment: does not change from one
state to the next while the agent is considering its
course of action. The only changes to the
environment are those caused by the agent itself.
• Dynamic environment: changes over time
independent of the actions of the agent; thus, if
an agent does not respond in a timely manner,
this counts as a choice to do nothing
– Example: an interactive tutor
Environments: Continuity
• Discrete / Continuous
– If the number of distinct percepts and actions is
limited, the environment is discrete, otherwise it
is continuous.
Environments: other agents
• Single agent/ Multi-agent
– If the environment contains other intelligent
agents, the agent needs to be concerned about
strategic, game-theoretic aspects of the
environment (for either cooperative or
competitive agents)
– Most engineering environments don’t have multi-
agent properties, whereas most social and
economic systems get their complexity from the
interactions of (more or less) rational agents.
Complex Environments
• Complexity of the environment includes
– Knowledge rich: the enormous amount of information that
the environment contains, and
– Input rich: the enormous amount of input the
environment can send to an agent.
• The agent must have a way of managing this complexity.
Often such considerations lead to the development of
– Sensing strategies and
– Attentional mechanisms
• So that the agent may more readily focus its efforts in
such rich environments.
Table based agent
• Information comes from sensors – percepts
• Look it up !
• Triggers actions through the effectors
• In a table-based agent, the mapping from percepts to
actions is stored in the form of a table
These are reactive agents: no notion of history; the current
state is as the sensors see it right now.
The table maps Percepts → Actions
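Such a percept-to-action table can be sketched as a plain dictionary lookup. The entries below are invented for illustration:

```python
# Table-based agent: the entire mapping from percepts to actions is
# stored explicitly. Every percept the designer anticipated gets a row.
TABLE = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "goal_visible": "move_to_goal",
}

def table_agent(percept):
    # Look the current percept up; fall back to doing nothing.
    return TABLE.get(percept, "do_nothing")
```

The drawbacks are visible directly: the table, not the agent, carries all the intelligence, and the designer must enumerate every percept in advance.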
Table based agent
• A table is a simple way to specify a mapping from
percepts to actions
– Tables may become very large
– All work done by the designer
– No autonomy, all actions are predetermined
– Learning might take a very long time
• Mapping is implicitly defined by a program
– Rule based
– Neural networks
– algorithms
Percept based agent
• Information comes from sensors – percepts
• Changes the agent's current state of the world
• Triggers action through the effectors
Reactive agents
Stimulus – response agents
No notion of history, the current state is as the
sensors see it right now.
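A stimulus-response agent can be sketched as a list of condition-action rules with no memory. The rules below are invented for illustration (a vacuum-world flavor):

```python
# Percept-based (simple reflex) agent: condition-action rules fire on
# the current percept only; there is no history and no internal state.
RULES = [
    (lambda p: p.get("dirty"), "suck"),
    (lambda p: p.get("location") == "A", "move_right"),
    (lambda p: p.get("location") == "B", "move_left"),
]

def reflex_agent(percept):
    # The first rule whose condition matches the current percept wins.
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"
```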
Subsumption Architecture
• Rodney Brooks, 1986
• Sensory inputs – action (lower animals)
• Brooks – follow the evolutionary path and build
simple agents for complex worlds.
• Features
– No explicit knowledge representation
– Distributed behavior (not centralized)
– Response to stimuli is reflexive
– Bottom up design – complex behaviors fashioned from
the combination of simpler underlying ones.
– Inexpensive individual agents
Subsumption Architecture
• The subsumption architecture is built in layers.
• Time scale of evolution: 5 billion years (cells)
– First humans: 2.5 million years
– Symbols: 5000 years
• Different layers implement different behaviors
• Higher layers can override lower layers.
• Each activity is implemented as a Finite State
Machine (FSM)
Mobile Robot Example
• Layer 0: Avoid Obstacles
– Sonar: generate sonar scan
– Collide: send HALT message
to forward
– Feel force: signal sent to
run-away, turn
• Layer 1: Wander Behavior
– Generates a random
heading
– Avoid reads repulsive force,
generates new heading,
feeds to turn and forward
Mobile Robot Example
• Layer 2: Exploration
behavior
– Whenlook notices idle
time and looks for an
interesting place.
– Pathplan sends new
direction to avoid.
– Integrate monitors the path traveled and
sends it to pathplan.
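The layered arbitration can be caricatured in a few lines. This is a simplification: Brooks's actual architecture wires augmented finite state machines together with suppression and inhibition links rather than a single priority loop, and the behavior names below are illustrative:

```python
# Subsumption-style arbitration sketch: behaviors are arranged in
# layers and a fixed priority ordering decides whose output drives
# the actuators. Here the safety reflex is given top priority.
def avoid_obstacles(sensors):        # cf. Layer 0
    return "halt" if sensors.get("collision") else None

def wander(sensors):                 # cf. Layer 1
    return "random_heading"

def arbitrate(behaviors, sensors):
    # behaviors are ordered highest priority first; the first one
    # producing an action suppresses those after it.
    for behavior in behaviors:
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"
```

Note there is no world model anywhere: each behavior reads raw sensor values, matching the "no explicit knowledge representation" feature above.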
Percept based Agent
• Efficient
• No internal representation for reasoning or
inference.
• No strategic planning or learning.
• Percept-based agents are not good for
multiple, opposing goals.
State based Agent
• Information comes from sensors-percepts
• Changes the agent's current state of the world
• Based on state of the world and knowledge
(memory), it triggers actions through the
effectors
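The difference from a reflex agent is the memory. A hedged sketch, with an invented exploration task, of how internal state changes the chosen action:

```python
# State-based (model-based) agent sketch: an internal state persists
# across steps and is updated from each percept; the action depends on
# that state, not just the current percept. The task (explore unvisited
# locations) is invented for illustration.
class StateBasedAgent:
    def __init__(self):
        self.state = {"visited": set()}  # memory of the world

    def update_state(self, percept):
        # Remember every location we have perceived so far.
        self.state["visited"].add(percept["location"])

    def choose_action(self, percept):
        # Behavior depends on memory: prefer unexplored neighbors.
        adjacent = percept["adjacent"]
        unexplored = [loc for loc in adjacent
                      if loc not in self.state["visited"]]
        if unexplored:
            return ("explore", unexplored[0])
        return ("backtrack", adjacent[0])

    def step(self, percept):
        self.update_state(percept)
        return self.choose_action(percept)
```

A reflex agent given the same percept twice must act the same way twice; this agent need not, which is exactly what the internal state buys.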
Goal-based Agent
• Information comes from sensors-percepts
• Changes the agent's current state of the world
• Based on state of the world and knowledge (memory)
and goals/intentions, it chooses actions and does them
through the effectors.
• Agent’s actions will depend upon its goal.
• Goal formulation based on the current situation is a
way of solving many problems and search is a universal
problem solving mechanism in AI.
• The sequence of steps required to solve a problem is
not known a priori and must be determined by a
systematic exploration of the alternatives.
Utility based agent
• A more general framework
• Different preferences for different goals
• A utility function maps a state or a sequence
of states to a real valued utility.
• The agent acts so as to maximize expected
utility.
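A minimal sketch of the idea, assuming one-step lookahead with an assumed (deterministic) `transition` model and a real-valued `utility` function; both helpers are illustrative, not from the slides:

```python
# Utility-based agent: a utility function maps states to real values;
# the agent picks the action whose predicted successor state has the
# highest utility (expected utility, in the deterministic special case).
def utility_agent(state, actions, transition, utility):
    return max(actions, key=lambda a: utility(transition(state, a)))
```

Because utilities are real-valued, conflicting goals are traded off numerically instead of being all-or-nothing, which is what makes this framework more general than a goal-based agent.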
Learning Agent
• Learning allows an agent to operate in initially
unknown environments
• The learning element modifies the
performance element
• Learning is required for true autonomy
Summary
• An agent perceives and acts in an
environment, has an architecture, and is
implemented by an agent program.
• An ideal agent always chooses the action
which maximizes its expected performance,
given its percept sequence so far.
• An autonomous agent uses its own experience
rather than built-in knowledge of the
environment by the designer.
Summary
• An agent program maps from percept to action and
updates its internal state.
– Reflex agents respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s).
– Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful
agent design.
• The most challenging environments are partially
observable, stochastic, sequential, dynamic, and
continuous, and contain multiple intelligent agents.
Questions
1. Define an agent.
2. What is a rational agent?
3. What is bounded rationality?
4. What is an autonomous agent?
5. Describe the salient features of an agent.
Questions
6. Find out about the Mars rover.
a. What are the percepts for this agent?
b. Characterize the operating environment.
c. What are the actions the agent can take?
d. How can one evaluate the performance of the agent?
e. What sort of agent architecture do you think is most
suitable for this agent?
7. Answer the same questions as above for an
Internet shopping agent.

Intelligent agents (bsc csit) lec 2
