1
IT406
Artificial Intelligence
Agents
Harris Chikunya
2
Intelligent Agents
• Nature of agents
• Agent environments
• Agent types
3
Agents and Environments
• An agent is anything that can perceive its environment through sensors and act upon that environment
through effectors.
– A human agent has sensory organs such as eyes, ears, nose, tongue, and skin as sensors, and other organs
such as hands, legs, and mouth as effectors.
– A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for
effectors.
– A software agent receives file contents, network packets, and human input (keyboard/mouse/touchscreen/voice) as
sensory inputs and acts on the environment by writing files, sending network packets, and displaying information or
generating sounds.
4
Example: A Windshield Wiper Agent
How do we design an agent that can wipe the windshields
when needed?
• Goals?
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment?
5
Example: A Windshield Wiper Agent cont…
• Goals: Keep windshields clean & maintain visibility
• Percepts: Raining, Dirty
• Sensors: Camera, moisture sensor
• Effectors: Wipers (left, right, back)
• Actions: Off, Slow, Medium, Fast
• Environment: Inner city, freeways, highways, weather …
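A minimal sketch of this wiper agent in Python; the rain_intensity reading and the thresholds are illustrative assumptions, not part of the slide.

```python
# Minimal sketch of the windshield wiper agent above.
# rain_intensity and its thresholds are assumed for illustration.

def wiper_agent(raining: bool, dirty: bool, rain_intensity: float = 0.0) -> str:
    """Map the current percepts to one of the actions Off, Slow, Medium, Fast."""
    if not (raining or dirty):
        return "Off"
    if dirty and not raining:
        return "Slow"              # assumed policy: a light wipe clears dirt
    if rain_intensity > 0.7:
        return "Fast"
    if rain_intensity > 0.3:
        return "Medium"
    return "Slow"

print(wiper_agent(raining=True, dirty=False, rain_intensity=0.5))  # -> Medium
```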
6
Agents Terminology
• Performance Measure of Agent: the criteria that determine how successful an agent
is, e.g. speed, power usage, accuracy.
• Behavior of Agent: the action that the agent performs after any given sequence of percepts.
• Percept: the agent’s perceptual inputs at a given instant.
• Percept Sequence: the history of everything the agent has perceived to date.
• Agent Function: a map from the percept sequence to an action.
• (Degree of) Autonomy: the extent to which the agent is able to make decisions and take actions
on its own.
• Perception (sequence) to Action Mapping: f : P* → A (sketched below)
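To illustrate the mapping f : P* → A, here is a hedged sketch of a table-driven agent whose action depends on its full percept sequence; the percepts, actions, and table entries are assumptions made up for the example.

```python
# Illustrative agent function f : P* -> A in a table-driven style.
# Percepts, actions, and the lookup table are assumed for the example.

percept_sequence = []                      # the agent's percept history (P*)

action_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "MoveRight",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept) -> str:
    """Append the new percept and look up the action for the whole sequence."""
    percept_sequence.append(percept)
    return action_table.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent(("A", "Clean")))  # -> MoveRight
print(table_driven_agent(("B", "Dirty")))  # -> Suck
```

Such a table grows with the length of the percept sequence, which is why the agent programs later in the lecture compute the action instead of looking it up.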
7
How is an Agent different from other software?
• Agents are autonomous, that is, they act on behalf of the user
• Agents contain some level of intelligence, from fixed rules to
learning engines that allow them to adapt to changes in the
environment
• Agents don't only act reactively, but sometimes also proactively
• Agents have social ability, that is, they communicate with the user,
the system, and other agents as required
• Agents may also cooperate with other agents to carry out more
complex tasks than they themselves can handle
• Agents may migrate from one system to another to access remote
resources or even to meet other agents
8
Rationality
• Rationality is the state of being reasonable, sensible, and having good judgment.
• Rationality is concerned with expected actions and results depending upon
what the agent has perceived.
• Performing actions with the aim of obtaining useful information is an
important part of rationality.
9
Rational Agent
• An ideal rational agent is one that acts so as to maximize its performance measure, on the basis of:
– Its percept sequence
– Its built-in knowledge base
• The rationality of an agent depends on the following:
1. The performance measure, which determines the degree of success.
2. The agent’s percept sequence to date.
3. The agent’s prior knowledge about the environment.
4. The actions that the agent can carry out.
• A rational agent always performs the right action, where the right action is the one
that causes the agent to be most successful given the percept sequence.
• The problem the agent solves is characterized by its Performance
Measure, Environment, Actuators, and Sensors (PEAS).
10
Exercise
• Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the
other square if not. Is this a rational agent?
Artificial Intelligence: A Modern Approach, 4th edition, page 103
11
Nature of Environments
• In designing an agent, the first step must always be to specify the task environment as fully as
possible.
• This is done by specifying the performance measure, the environment, and the agent’s
actuators and sensors, also known as PEAS (Performance, Environment, Actuators, Sensors).
12
Example: Automated taxi driver
• PEAS description of the task environment for an automated taxi driver
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits; minimize impact on other road users
Environment: Roads, other traffic, police, pedestrians, customers, weather
Actuators: Steering, accelerator, brake, signal, horn, display, speech
Sensors: Cameras, radar, speedometer, GPS, engine sensor, accelerometer, microphones, touchscreen
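As an aside, a PEAS description can be treated as a simple structured record; this hedged sketch just restates the taxi-driver entry above in Python (the dataclass layout is an illustrative choice).

```python
# PEAS description of the automated taxi driver as a structured record.
# Field names are an illustrative choice; values come from the table above.

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits", "minimize impact on other road users"],
    environment=["roads", "other traffic", "police", "pedestrians", "customers", "weather"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display", "speech"],
    sensors=["cameras", "radar", "speedometer", "GPS", "engine sensor",
             "accelerometer", "microphones", "touchscreen"],
)
```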
13
Exercise
• Consider the following agent types. What are their PEAS
descriptions?
1. Medical Diagnosis system
2. Satellite image analysis system
3. Part-picking robot
4. Refinery controller
5. Interactive English Tutor
14
Properties of Task Environment
• Discrete / Continuous: If there is a limited number of distinct,
clearly defined states of the environment, the environment is
discrete (for example, chess); otherwise it is continuous (for
example, driving).
• Observable / Partially Observable: If it is possible to determine the
complete state of the environment at each point in time from the
percepts, the environment is fully observable; otherwise it is only partially observable.
• Static / Dynamic: If the environment does not change while an
agent is acting, then it is static; otherwise it is dynamic.
• Single agent / Multiple agents: The environment may contain other
agents which may be of the same or different kind as that of the
agent.
15
Properties of Task Environment
• Accessible vs. inaccessible: If the agent’s sensory apparatus can have access to the complete
state of the environment, then the environment is accessible to that agent.
• Deterministic vs. Non-deterministic: If the next state of the environment is completely
determined by the current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic.
• Episodic vs. Non-episodic: In an episodic environment, each episode consists of the agent
perceiving and then acting. The quality of its action depends just on the episode itself.
Subsequent episodes do not depend on the actions in the previous episodes. Episodic
environments are much simpler because the agent does not need to think ahead.
16
Exercise
• Classify the following according to the environment properties
17
Structure of Agents
• An agent’s structure can be viewed as:
– Agent = Architecture + Agent Program
– Architecture = the machinery that the agent program executes on.
– Agent Program = an implementation of the agent function (see the sketch below).
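A hedged sketch of this split, with an assumed one-line agent program and a stand-in "architecture" loop that feeds it percepts and collects the chosen actions.

```python
# Agent = Architecture + Agent Program, sketched with assumed names.

def agent_program(percept: str) -> str:
    """Agent program: an implementation of the agent function (percept -> action)."""
    return "Suck" if percept == "Dirty" else "MoveRight"

def run_architecture(program, percepts):
    """Stand-in for the architecture: the machinery that feeds sensor percepts
    to the program and passes its actions on to the effectors (here, a list)."""
    return [program(p) for p in percepts]

print(run_architecture(agent_program, ["Dirty", "Clean", "Dirty"]))
# -> ['Suck', 'MoveRight', 'Suck']
```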
18
Types of Agents
•Simple reflex agents;
•Model-based reflex agents;
•Goal-based agents; and
•Utility-based agents.
19
Simple reflex agents
• They choose actions based only on the current percept.
• They are rational only if a correct decision can be made on the basis
of the current percept alone.
• Their environment must be fully observable.
• Condition–Action Rule: a rule that maps a state (condition) to
an action (see the sketch below).
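A minimal sketch of a simple reflex agent, using the "car in front is braking" condition-action rule mentioned in the speaker notes; the rule set and the percept interpretation are illustrative assumptions.

```python
# Simple reflex agent driven by condition-action rules (illustrative rule set).

rules = {
    "car-in-front-is-braking": "initiate-braking",
    "road-is-clear": "maintain-speed",
}

def interpret_input(percept: str) -> str:
    """Assumed percept interpretation: reduce the raw percept to a rule condition."""
    return "car-in-front-is-braking" if "brake lights" in percept else "road-is-clear"

def simple_reflex_agent(percept: str) -> str:
    """Choose an action from the current percept only; no memory of past percepts."""
    return rules.get(interpret_input(percept), "no-op")

print(simple_reflex_agent("brake lights on ahead"))  # -> initiate-braking
```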
20
Model-based reflex agents
• They use a model of the world to choose their actions.
• They maintain an internal state.
• Model: knowledge about how things happen in the world.
• Internal State: a representation of the unobserved aspects of the current state,
built up from the percept history (see the sketch below).
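A hedged sketch of a model-based reflex agent for a two-square vacuum world: the internal state records which squares are believed clean, an aspect of the world the agent cannot currently observe. The state layout and rules are assumptions for illustration.

```python
# Model-based reflex agent for a two-square vacuum world (illustrative rules).

internal_state = {"A": "Unknown", "B": "Unknown"}     # agent's model of the world

def model_based_reflex_agent(percept) -> str:
    location, status = percept
    internal_state[location] = status                 # update state from the percept
    if status == "Dirty":
        return "Suck"
    other = "B" if location == "A" else "A"
    if internal_state[other] == "Clean":
        return "NoOp"                                 # model says everything is clean
    return "MoveRight" if location == "A" else "MoveLeft"

print(model_based_reflex_agent(("A", "Clean")))       # -> MoveRight
print(model_based_reflex_agent(("B", "Dirty")))       # -> Suck
```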
21
Goal-based agents
• A goal-based agent has one or more goals that form the basis of its actions.
• It keeps track of the world state as well as the set of goals it is trying to achieve, and chooses an
action that will lead to the achievement of its goals (see the sketch below).
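A hedged sketch of a goal-based agent: it consults a tiny model of action outcomes and picks an action whose predicted result satisfies the goal. The transition map and goal are made up for the example.

```python
# Goal-based agent: choose an action predicted to achieve the goal (assumed model).

transitions = {                        # model: (state, action) -> resulting state
    ("home", "drive-north"): "office",
    ("home", "drive-south"): "mall",
}

def goal_based_agent(state: str, goal: str) -> str:
    """Return an action whose predicted outcome is the goal, or 'no-op' if none."""
    for (s, action), result in transitions.items():
        if s == state and result == goal:
            return action
    return "no-op"

print(goal_based_agent("home", "office"))  # -> drive-north
```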
22
Utility-based agents
• It uses a model of the world, along with a utility function that
measures its preferences among states of the world.
• It then chooses the action that leads to the best expected utility,
where expected utility is computed by averaging over all possible
outcome states, weighted by the probability of each outcome (see the sketch below).
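A hedged sketch of this expected-utility choice: each action's expected utility is the probability-weighted average of the utilities of its possible outcome states, and the agent picks the maximizing action. The outcome distributions and utility values are illustrative assumptions.

```python
# Utility-based agent: pick the action with the highest expected utility.
# Outcome probabilities and utility values are assumed for illustration.

outcomes = {                                    # action -> [(outcome state, probability)]
    "take-highway":   [("fast-trip", 0.7), ("traffic-jam", 0.3)],
    "take-backroads": [("fast-trip", 0.4), ("traffic-jam", 0.6)],
}
utility = {"fast-trip": 10.0, "traffic-jam": 2.0}

def expected_utility(action: str) -> float:
    """Average the outcome utilities weighted by their probabilities."""
    return sum(p * utility[state] for state, p in outcomes[action])

def utility_based_agent() -> str:
    return max(outcomes, key=expected_utility)

print(utility_based_agent())                    # -> take-highway (EU 7.6 vs 5.2)
```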
23
End

Editor's Notes

  • #3 An AI system is composed of an agent and its environment. The agents act in their environment. The environment may contain other agents.
  • #4 Agents interact with environments through sensors and actuators/effectors
  • #12 The nature of environments agents operate in differ and they come in a variety of flavors. The nature of the task environment directly affects the appropriate design for the agent program
  • #13 First, what is the performance measure to which we would like our automated driver to aspire? Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; maximizing profits. Next, what is the driving environment that the taxi will face? Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator and control over steering and braking. In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise. The basic sensors for the taxi will include one or more video cameras so that it can see, as well as lidar and ultrasound sensors to detect distances to other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to control the vehicle properly, especially on curves, it should have an accelerometer. To determine the mechanical state of the vehicle, it will need the usual array of engine, fuel, and electrical system sensors. Like many human drivers, it might want to access GPS signals so that it doesn’t get lost. Finally, it will need touchscreen or voice input for the passenger to request a destination.
  • #18 So far we have talked about agents by describing behavior—the action that is performed after any given sequence of percepts. Now we talk about how the insides work. The job of AI is to design an agent program that implements the agent function—the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators—we call this the agent architecture:
  • #19 There are four basic kinds of agent programs that embody the principles underlying almost all intelligent systems.
  • #20 Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call “The car in front is braking.” Then, this triggers some established connection in the agent program to the action “initiate braking.” if car-in-front-is-braking then initiate-braking.
  • #21 The information comes from the sensors (percepts). Based on this, the agent updates the current state of the world. Based on the state of the world and its knowledge (memory), it triggers actions through the effectors.