1. TOPIC: PEAS AND PROPERTIES OF AN AGENT / TYPES OF ENVIRONMENT
NAME: DEEPIKA GOUDA
&
LAVANYA KATKAR
2. WHAT IS AN INTELLIGENT AGENT?
An intelligent agent is a program that can make decisions or perform a
service based on its environment, user input, and experiences.
3. What is PEAS?
• PEAS stands for Performance, Environment, Actuators, Sensors. These
four elements define the task environment for an intelligent agent.
Hence, PEAS is an important representation system for specifying an
Artificial Intelligence model. We shall see what these terms mean
individually.
A. Performance measure: These are the parameters used to measure the
performance of the agent. They also define the agent's success or
accuracy in achieving its set goals.
4. B. Environment: This is the task environment of the agent. The
agent interacts with its environment: it takes perceptual input from
the environment through sensors and acts on the environment through
actuators.
C. Actuators: These are the means of performing calculated actions on
the environment. For a human agent, hands and legs are the actuators.
D. Sensors: These are the means of taking input from the environment.
For a human agent, ears, eyes, and nose are the sensors.
5. Examples:

Agent: Hospital Management System
  Performance measure: Patient's health, admission process, payment
  Environment: Hospital, doctors, patients
  Actuators: Prescription, diagnosis, scan report
  Sensors: Symptoms, patient's response

Agent: Automated car driver
  Performance measure: Comfortable trip, safety, maximum distance
  Environment: Roads, traffic, vehicles
  Actuators: Steering wheel, accelerator, brake, mirror
  Sensors: Camera, GPS, odometer

Agent: Satellite image analysis system
  Performance measure: Correct image categorization
  Environment: Downlink from orbiting satellite
  Actuators: Display of scene categorization
  Sensors: Colour pixel arrays

Agent: Subject tutoring
  Performance measure: Maximized scores, improvement in students
  Environment: Classroom, desk, chair, board, staff, students
  Actuators: Smart displays, corrections
  Sensors: Eyes, ears, notebooks
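A PEAS description can be written down as a small data structure. The `PEAS` class and its field names below are illustrative choices for this sketch, not a standard API — a minimal Python example using the automated car driver from the table above:

```python
from dataclasses import dataclass

# One field per letter of the PEAS acronym.
@dataclass
class PEAS:
    performance: list   # how the agent's success is measured
    environment: list   # what the agent operates in
    actuators: list     # how the agent acts on the environment
    sensors: list       # how the agent perceives the environment

# The automated car driver from the examples above.
car_driver = PEAS(
    performance=["comfortable trip", "safety", "maximum distance"],
    environment=["roads", "traffic", "vehicles"],
    actuators=["steering wheel", "accelerator", "brake", "mirror"],
    sensors=["camera", "GPS", "odometer"],
)

print(car_driver.sensors)  # the percept sources for this agent
```

Writing the four lists out this way makes it easy to compare task environments for different agents side by side.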
6. NATURE OF ENVIRONMENT:
An environment in Artificial Intelligence is the surrounding of the
agent. The agent takes input from the environment through sensors and
delivers output to the environment through actuators.
7. PROPERTIES OF THE AGENT OR TYPES OF ENVIRONMENT:
❏ Observable (Fully/Partially): A fully observable environment is one
in which the agent has complete information about the current state of
the environment, e.g. chess or tic-tac-toe.
• When an agent cannot determine the complete state of the environment
at all points of time, the environment is partially observable, e.g.
driving a car in traffic or playing cards.
❏ Agents (Single/Multi): A single-agent environment is one in which a
single agent interacts with the environment to achieve its goals.
Examples of single-agent environments include puzzles and mazes.
• A multi-agent environment is one in which multiple agents interact
with each other and the environment to achieve their individual or
collective goals. Examples of multi-agent environments include
multiplayer games.
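The fully/partially observable distinction can be illustrated with a toy world. The world layout and function names here are invented for this sketch:

```python
# Toy world: a row of four cells, each either dirty or clean.
world = ["dirt", "clean", "dirt", "clean"]

def fully_observable_percept(world):
    # Fully observable: the percept is the entire state.
    return list(world)

def partially_observable_percept(world, position):
    # Partially observable: the agent only senses the cell
    # it currently occupies.
    return world[position]

print(fully_observable_percept(world))
print(partially_observable_percept(world, 2))
```

In the partially observable case the agent would need memory or inference to reconstruct the rest of the state.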
8. ❏ Deterministic/Stochastic: A deterministic environment is one in
which the outcome of an action is completely predictable and can be
precisely determined. Examples of deterministic environments include
simple mathematical equations, where the outcome of each operation is
precisely defined.
• A stochastic environment is one in which the outcome of an action is
uncertain and involves probability, e.g. a dice game such as snakes
and ladders.
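The contrast can be shown with two toy transition functions; both are invented for this sketch rather than taken from any library:

```python
import random

def deterministic_step(position, step):
    # Deterministic: the same action in the same state always
    # produces the same next state.
    return position + step

def stochastic_step(position):
    # Stochastic: the outcome involves chance, like rolling a
    # die in snakes and ladders.
    return position + random.randint(1, 6)

print(deterministic_step(3, 2))  # always 5
print(stochastic_step(3))        # any of 4..9
```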
❏ Episodic/Sequential: An episodic environment is one in which the
agent's actions do not affect the future states of the environment;
each episode stands alone, e.g. a robot inspecting parts on a
conveyor belt, where each part is judged independently of the last.
• A sequential environment is one in which the agent's actions affect
the future states of the environment, e.g. chess, robotics
applications, or video games.
9. ❏ Static/Semi/Dynamic: A static environment is one in which the
environment does not change over time; its state remains constant
while the agent deliberates. E.g. a room being cleaned by a
vacuum-cleaner robot (agent) is a static environment: the room itself
does not change while it is being cleaned.
• A dynamic environment is one in which the environment changes over
time, e.g. the traffic around a taxi.
❏ Discrete/Continuous: A discrete environment is one in which the
state and action spaces are finite and discrete. Examples include
board games like chess, where the board has a finite number of
squares.
• A continuous environment is one in which the state and action spaces
are continuous and infinite. E.g. in a basketball game, the positions
of players (environment) change continuously, and a shot (action)
toward the basket can have any of infinitely many angles and speeds.
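The discrete/continuous distinction can be sketched in code: a chessboard has a finite state set, while a shot toward the basket is parameterised by real-valued speed and angle. The projectile function below is a standard drag-free formula, used purely as an illustration of a continuous action space:

```python
import math

# Discrete: a chess piece occupies one of finitely many squares.
squares = [f + r for f in "abcdefgh" for r in "12345678"]
print(len(squares))  # 64

# Continuous: speed and angle can be any real numbers in a range,
# so the set of possible shots is infinite.
def shot_height(speed, angle_degrees, x):
    # Height of a drag-free projectile at horizontal distance x.
    a = math.radians(angle_degrees)
    g = 9.81
    return x * math.tan(a) - g * x ** 2 / (2 * (speed * math.cos(a)) ** 2)

print(shot_height(10.0, 45.0, 1.0))
```

Enumerating `squares` is possible precisely because the space is discrete; no such enumeration exists for the continuous shot parameters.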
10. ❏ Known and Unknown: In a known environment, the outcomes of all
possible actions are known to the agent. Examples of known
environments include chess or tic-tac-toe.
• In an unknown environment, the agent must first gain knowledge about
how the environment works before it can make decisions. Examples of
unknown environments include exploration tasks or real-world
applications.
❏ Competitive and Collaborative: An agent is said to be in a
competitive environment when it competes against another agent, e.g.
games such as Go or chess.
• Collaborative environments rely on cooperation between multiple AI
agents working to produce a desired output together. Self-driving
vehicles cooperating to avoid collisions, or smart-home sensors
interacting, are examples of collaborative AI environments.