Artificial Intelligence
Instructor:
Muhammad Javaid Iqbal
Lecturer
javaid.Iqbal@superior.edu.pk
JTech Learning Channel
https://youtube.com/playlist?list=PLPKrqmxs-DAFhdh4VbIbw8LHEJjzg86ZY
Today’s Agenda
• Agents
• Intelligent Agents
• Structure of Intelligent Agent
• Rational Agent
• Agent Types
Important Terminologies - Agent
• Sensor (Perceive): A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
• Actuators (Act): Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors (Response): Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, or a display screen.
Important Terminologies - Agent
• Percept Sequence − The complete history of everything the agent has perceived to date.
• Agent Function − A map from the percept sequence to an action; a mathematical/abstract description.
• Agent Program − Runs on the physical architecture to produce the agent function; the concrete implementation.
• Behavior of Agent − The action that the agent performs after any given sequence of percepts.
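As an illustration (not from the slides), the abstract agent function can be sketched as a lookup table from percept sequences to actions; the percepts and actions below are hypothetical, using a two-location world:

```python
# Hypothetical sketch: the agent function maps a percept sequence to an action.
# A table is the abstract/mathematical view; a real agent program would
# compute the action instead of storing every possible history.

def table_driven_agent_function(percept_sequence):
    """Map the full percept history (a sequence of percepts) to an action."""
    table = {
        (("A", "dirty"),): "suck",
        (("A", "clean"),): "right",
        (("A", "clean"), ("B", "dirty")): "suck",
    }
    return table.get(tuple(percept_sequence), "no-op")

percepts = []                   # the percept sequence grows over time
percepts.append(("A", "dirty"))
action = table_driven_agent_function(percepts)   # -> "suck"
```

The table grows with every possible history, which is why concrete agent programs compute actions rather than tabulate them.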
Agent
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
For example:
• A Human Agent has sensory organs such as eyes, ears, nose, tongue, and skin serving as sensors, and other organs such as hands, legs, and mouth as effectors.
• A Robotic Agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
• A Software Agent has encoded bit strings as its percepts and actions.
A simple example of Agent
Intelligent Agent
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals.
An intelligent agent may learn from the environment in order to achieve its goals.
Example: Thermostat
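The thermostat example can be sketched as a minimal perceive-decide-act loop; the target temperature and the one-degree band below are illustrative assumptions, not from the slides:

```python
# A minimal thermostat agent: percept = current temperature,
# action = switch the heater on, off, or hold.
# Threshold values are illustrative assumptions.

def thermostat_agent(temperature, target=20.0):
    """Decide an action from the current temperature percept alone."""
    if temperature < target - 1.0:
        return "heater_on"
    elif temperature > target + 1.0:
        return "heater_off"
    return "hold"
```

Even this tiny agent satisfies the four rules: it perceives (temperature), uses the observation to decide, the decision results in an action, and the action is rational with respect to keeping the room near the target.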
Intelligent Agent - Four Rules
• Rule 1: An AI agent must have the ability to perceive the
environment.
• Rule 2: The observation must be used to make decisions.
• Rule 3: The decision should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
Recall - Rationality
● An agent should "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
● How do we measure performance?
○ Using an evaluation measure based on an objective criterion for the success of the agent's behaviour
● Back to the vacuum-cleaner example:
○ Amount of dirt cleaned within a certain time
○ +1 credit for each clean square per unit time
● General rule: measure what one wants, rather than how one thinks the agent should behave
Rational Agent
For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.
Rational Agent - Example
● A simple agent that cleans a square if it is dirty and moves
to the other square if not. Is it rational?
● Assumption:
○ performance measure: 1 point for each clean square at each time step
○ environment is known a priori
○ actions = {left, right, suck}
○ agent is able to perceive the location and dirt in that location
● Given different assumptions, it might not be rational anymore
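Under the stated assumptions, the vacuum example can be simulated in a few lines; the world representation below is an illustrative sketch, awarding 1 point for each clean square at each time step:

```python
# Sketch of the two-square vacuum world: actions = {left, right, suck},
# performance = 1 point per clean square per time step.

def reflex_vacuum_agent(location, dirty):
    """Clean the square if it is dirty, otherwise move to the other square."""
    if dirty:
        return "suck"
    return "right" if location == "A" else "left"

def run(world, steps=4):
    """world: {'A': dirty?, 'B': dirty?}. Returns the total performance score."""
    location, score = "A", 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "suck":
            world[location] = False
        elif action == "right":
            location = "B"
        else:
            location = "A"
        score += sum(1 for dirty in world.values() if not dirty)
    return score
```

Changing the assumptions (e.g. charging a penalty per move, or making dirt reappear) changes the score, which is exactly why the same agent may stop being rational.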
Autonomy
● The behaviour of an agent depends on its own experience
as well as the built-in knowledge of the agent instilled by
the agent designer.
● A system is autonomous if it takes actions according to its
experience.
Agents - Variants/Types
• Simple Reflex Agent
• Model Based Reflex Agents
• Goal Based Agents
• Utility Based Agents
• Learning Agents
Agents - Variants/Types - Simple Reflex Agent
● A reflex agent works similarly to the reflex actions of our body (e.g. immediately lifting a finger when it touches the tip of a flame).
● Just as our body responds promptly to the current situation, the agent responds based only on the current state of the environment, irrespective of the environment's past states. A reflex agent works properly only if the right decision can be made on the basis of the current percept alone.
Agents - Variants/Types - Simple Reflex Agent
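The structure above is usually drawn as a set of condition-action rules; a minimal sketch, using the finger-and-flame reflex from the slide as a hypothetical percept:

```python
# A simple reflex agent selects an action from condition-action rules applied
# to the CURRENT percept only; no percept history is kept.
# The rule set below is illustrative.

RULES = [
    (lambda p: p.get("touching_flame", False), "withdraw_finger"),
    (lambda p: True, "no_op"),                 # default rule
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action
```

Because only the current percept is consulted, the agent fails whenever the right action depends on something it can no longer see.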
Agents - Variants/Types - Model Based Reflex Agent
● These are agents with memory. They store information about the previous state as well as the current state, and act accordingly.
● For example, while driving, if the driver wants to change lanes, he looks into the mirror to learn the present position of the vehicles behind him. While looking ahead he can only see the vehicles in front, but since he already has information about the vehicles behind him (from the mirror a moment ago), he can change lanes safely. The previous and current states are updated quickly to decide the action.
Agents - Variants/Types - Model Based Reflex Agent
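The driving example can be sketched as an agent with internal state; the percept fields below are illustrative assumptions:

```python
# A model-based reflex agent keeps an internal model of the parts of the
# environment it cannot currently observe. Here: the last mirror check
# (vehicles behind) persists while the driver looks ahead.

class LaneChangeAgent:
    def __init__(self):
        self.vehicles_behind = None   # internal state: unobserved part of the world

    def act(self, percept):
        # Update the internal model from whatever the current percept reveals.
        if "vehicles_behind" in percept:
            self.vehicles_behind = percept["vehicles_behind"]
        # Decide using the current percept PLUS the remembered state.
        if percept.get("want_lane_change") and self.vehicles_behind == 0:
            return "change_lane"
        return "keep_lane"
```

A simple reflex agent given only the forward view could never change lanes safely; the stored state is what makes the decision possible.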
Agents - Variants/Types - Goal Based Agent
● In some circumstances, just the information of the current
state may not help in making the right decision.
● If the goal is known, then the agent takes into account the
goal information besides the current state information to
make the right decision.
● E.g. if the agent is a self-driving car and the goal is the
destination, then the information of the route to the
destination helps the car in deciding when to turn left or
right.
Agents - Variants/Types - Goal Based Agent
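The self-driving-car example can be sketched as an agent that compares its position with the goal before acting; the grid coordinates and action names are illustrative assumptions:

```python
# Goal-based sketch: the agent combines the current state (its position)
# with goal information (the destination) to choose a turn.
# Greedy movement on a grid, for illustration only.

def goal_based_step(position, goal):
    """Choose the action that moves the agent toward the goal."""
    x, y = position
    gx, gy = goal
    if gx != x:
        return "right" if gx > x else "left"
    if gy != y:
        return "forward" if gy > y else "back"
    return "stop"               # goal reached
```

Unlike a reflex agent, the same position produces different actions for different destinations, because the goal is part of the decision.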
Agents - Variants/Types - Utility Agent
● There can be many possible action sequences that achieve the goal, but some will be better than others.
● In the self-driving-car example above, the destination is known but there are multiple routes. Choosing an appropriate route also matters to the overall success of the agent. Many factors enter into choosing a route, such as the shortest one, the most comfortable one, etc. Success depends on the agent's utility function, which is based on user preferences.
Agents - Variants/Types - Utility Agent
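Route choice can be sketched with a utility function over user preferences; the routes, attributes, and weights below are illustrative assumptions:

```python
# Utility-based sketch: several routes reach the goal; the agent scores each
# with a utility function and picks the best. Weights encode user preferences.

def utility(route, w_time=1.0, w_comfort=0.5):
    """Higher is better: penalize travel time, reward comfort."""
    return -w_time * route["time"] + w_comfort * route["comfort"]

def choose_route(routes):
    return max(routes, key=utility)

routes = [
    {"name": "highway", "time": 30, "comfort": 8},   # utility: -30 + 4  = -26
    {"name": "scenic",  "time": 50, "comfort": 10},  # utility: -50 + 5  = -45
]
best = choose_route(routes)      # -> the "highway" route
```

Changing the weights (e.g. valuing comfort much more than time) can flip the choice, which is exactly how user preferences shape the agent's behaviour.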
Today’s Agenda
• PEAS?
• Agent Environments
PEAS
● There are multiple types/variants of agents (discussed in the previous lecture).
● PEAS (Performance measure, Environment, Actuators, Sensors) is a scheme used to categorize similar systems together.
● It describes an agent's performance measure with respect to the environment, actuators, and sensors of the respective agent.
PEAS
Example
• Self Driving Car
• Auto Pilot
• Cooking Agent
• Teaching
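For the self-driving-car example, a PEAS description can be written out as a plain mapping; the entries below are standard textbook-style illustrations, not an exhaustive specification:

```python
# A PEAS description for the self-driving-car example.
# Each list is illustrative, not exhaustive.

peas_self_driving_car = {
    "Performance": ["safety", "legal speed", "comfortable trip", "short travel time"],
    "Environment": ["roads", "other vehicles", "pedestrians", "traffic signals"],
    "Actuators":   ["steering wheel", "accelerator", "brake", "horn"],
    "Sensors":     ["cameras", "GPS", "speedometer", "sonar"],
}
```

The same four headings can be filled in for the autopilot, cooking, and teaching examples as an exercise.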
Environments
• Fully observable vs Partially Observable
• Static vs Dynamic
• Discrete vs Continuous
• Deterministic vs Stochastic
• Single-agent vs Multi-agent
• Episodic vs Sequential
• Known vs Unknown
• Accessible vs Inaccessible
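Two familiar tasks, classified along these dimensions, make the distinctions concrete; chess and taxi driving are standard textbook examples, used here as illustrative assumptions:

```python
# Illustrative classification of two tasks along the environment dimensions above.

environments = {
    "chess with a clock": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": False,       # semi-static: the clock keeps running
        "discrete": True, "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}
```

Taxi driving sits at the hard end of nearly every dimension, which is why it is a common benchmark for discussing environment difficulty.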
Environments - Observation
• Fully Observable
• Partially Observable
• Non-Observable
Which environment will be easy for an agent?
Environments - Prediction
• Deterministic
• Stochastic
Which environment will be easy for an agent?
Environments - Event
• Episodic
• Serial/Sequential
Which environment will be easy for an agent?
Environments - Existence
• Single
• Multiple
Which environment will be easy for an agent?
Environments - Consistency
• Static
• Dynamic
Which environment will be easy for an agent?
Environments - Certainty
• Known
• Unknown
Which environment will be easy for an agent?
Environments - Accessibility
• Accessible
• Inaccessible
Which environment will be easy for an agent?
Thanks

W2_Lec03_Lec04_Agents.pptx
