INTELLIGENT AGENTS
Agent and Environment
[Diagram: the agent receives percepts from the environment through sensors; its agent program ("?") chooses actions, which are carried out through effectors.]
Agent and Environment
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon
that environment through its effectors/actuators.
• Example:
• Human agent
• Robotic agent
• Software agent
Simple Terms
• Percept
• Agent’s perceptual inputs at any given instant
• Percept sequence
• Complete history of everything that the agent has ever perceived.
• Action
• An operation involving an actuator
• Actions can be grouped into action sequences
A Windshield Wiper Agent
How do we design an agent that can wipe the windshields
when needed?
• Goals?
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment?
A Windshield Wiper Agent (Cont’d)
• Goals: Keep windshields clean & maintain visibility
• Percepts: Raining, Dirty
• Sensors: Camera (moist sensor)
• Effectors: Wipers (left, right, back)
• Actions: Off, Slow, Medium, Fast
• Environment: Inner city, highways, weather
Interacting Agents
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts ?
• Sensors?
• Effectors ?
• Actions ?
• Environment: Freeway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts ?
• Sensors?
• Effectors ?
• Actions ?
• Environment: Freeway
Interacting Agents
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts: Obstacle distance, velocity, trajectory
• Sensors: Vision, proximity sensing
• Effectors: Steering Wheel, Accelerator, Brakes, Horn, Headlights
• Actions: Steer, speed up, brake, blow horn, signal (headlights)
• Environment: Highway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts: Lane center, lane boundaries
• Sensors: Vision
• Effectors: Steering Wheel, Accelerator, Brakes
• Actions: Steer, speed up, brake
• Environment: Highway
Agent function & program
• Agent’s behavior is mathematically described
by
• Agent function
• A function mapping any given percept sequence to
an action
• Practically it is described by
• An agent program
• The real implementation
Vacuum-cleaner world
• Percepts: Is the current square Clean or Dirty? Which square is the agent in?
• Actions: Move Left, Move Right, Suck, Do Nothing
Vacuum-cleaner world
Program implements the agent function
function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
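The pseudocode above can be written as a minimal Python sketch. The two-square world with squares named A and B follows the slides; everything else is illustrative:

```python
def reflex_vacuum_agent(location, status):
    """Map the current percept (location, status) directly to an action.

    The agent ignores percept history: the same percept always
    produces the same action.
    """
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # → Suck
print(reflex_vacuum_agent("A", "Clean"))  # → Right
```

Note that this program implements the agent function without ever storing the percept sequence, which is exactly what makes it a *simple reflex* design.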
Agents
• Have sensors, actuators, goals
• Agent program
• Implements mapping from percept sequences to actions
• Performance measure to evaluate agents
• An autonomous agent decides autonomously which action
to take in the current situation to maximize the
progress towards its goals.
Behavior and performance of Agents in
terms of agent function
• Perception (sequence) to Action Mapping:
• Ideal mapping: specifies which actions an agent ought to take at any
point in time
• Description: Look-Up-Table
• Performance measure: a subjective measure to
characterize how successful an agent is (e.g., speed, power
usage, accuracy, money, etc.)
• (degree of) Autonomy: to what extent is the agent able to
make decisions and take actions on its own?
Performance measure
• A general rule:
• Design performance measures according to
• What one actually wants in the environment
• Rather than how one thinks the agent should behave
• E.g., in vacuum-cleaner world
• We want the floor clean, no matter how the agent behaves
• We don’t restrict how the agent behaves
Agents
• Fundamental faculties of intelligence
• Acting
• Sensing
• Understanding, reasoning and learning
• In order to act you must sense.
• Robotics: Sensing and acting, understanding is
not necessary
Intelligent Agents
• Must sense
• Must act
• Must be autonomous
• Must be rational
Rational Agent
• AI is about building rational agents
• A rational agent always does the right thing.
• What are the functionalities?
• What are the components?
• How do we build them?
How is an Agent different from other
software?
• Agents are autonomous, that is, they act on behalf of the
user
• Agents contain some level of intelligence, from fixed
rules to learning engines that allow them to adapt to
changes in the environment
• Agents don't only act reactively, but sometimes also
proactively
How is an Agent different from other
software?
• Agents have social ability, that is, they communicate
with the user, the system, and other agents as required
• Agents may also cooperate with other agents to carry out
more complex tasks than they themselves can handle
• Agents may migrate from one system to another to
access remote resources or even to meet other agents
Rationality
• What is rational at any given time depends on four things:
• The performance measure defining the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence up to now
Rational agent
• For each possible percept sequence,
• a rational agent should select
• an action expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in
knowledge the agent has
• E.g., an exam
• Maximize marks, based on
the questions on the paper & your knowledge
Example of a rational agent
• Performance measure
• Awards one point for each clean square
• at each time step, over a lifetime of 10000 time steps
• Prior knowledge about the environment
• The geography of the environment
• Only two squares
• The effect of the actions
Example of a rational agent
• Actions that the agent can perform
• Left, Right, Suck and NoOp
• Percept sequences
• Where is the agent?
• Does the location contain dirt?
• Under this circumstance, the agent is rational.
• An omniscient agent
• Knows the actual outcome of its actions in
advance
• No other possible outcomes
• However, impossible in real world
Omniscience
• Rationality depends on the agent’s circumstances, not on perfect foresight.
• As rationality maximizes
• Expected performance
• Perfection maximizes
• Actual performance
• Hence rational agents are not omniscient.
Learning
• Does a rational agent depend on only
current percept?
• No, the past percept sequence should also
be used
• This is called learning
• After experiencing an episode, the agent
• should adjust its behaviors to perform better for
the same job next time.
Autonomy
• If an agent just relies on the prior knowledge of its
designer rather than its own percepts then the agent
lacks autonomy
A rational agent should be autonomous- it should learn
what it can to compensate for partial or incorrect prior
knowledge.
Nature of Environments
• Task environments are the problems
• While the rational agents are the solutions
• Specifying the task environment through PEAS
• In designing an agent, the first step must always be to specify the
task environment as fully as possible.
• E.g., an automated taxi driver
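A PEAS specification can be recorded as a simple data structure. The sketch below is illustrative; the taxi entries summarize the bullets on the following slides, and the `PEAS` class name is my own:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """One PEAS task-environment description: Performance measure,
    Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["correct destination", "minimize fuel/time/cost",
                 "obey traffic laws", "safety and comfort"],
    environment=["roads", "traffic lights", "other vehicles",
                 "pedestrians", "customers"],
    actuators=["accelerator", "steering", "gear shift", "brake", "display"],
    sensors=["cameras", "GPS", "odometer", "engine sensors"],
)
print(taxi.sensors)
```

Writing the four lists down before coding the agent is the "specify the task environment as fully as possible" step.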
Task environments
• Performance measure
• How can we judge the automated driver?
• Which factors are considered?
• getting to the correct destination
• minimizing fuel consumption
• minimizing the trip time and/or cost
• minimizing the violations of traffic laws
• maximizing the safety and comfort, etc.
• Environment
• A taxi must deal with a variety of roads
• Traffic lights, other vehicles, pedestrians, stray
animals, road works, police cars, etc.
• Interact with the customer
Task environments
• Actuators (for outputs)
• Control over the accelerator, steering, gear shifting
and braking
• A display to communicate with the customers
• Sensors (for inputs)
• Detect other vehicles, road situations
• GPS (Global Positioning System)
• Odometer, engine sensors……
Task environments
Properties of task environments
• Fully observable vs. Partially observable
• If an agent’s sensors give it access to the complete
state of the environment at each point in time then the
environment is fully observable
• An environment might be Partially observable because
of noisy and inaccurate sensors or because parts of
the state are simply missing from the sensor data.
• Fully observable environments are convenient because the agent
need not maintain any internal state to keep track of the world.
• Single agent VS. multiagent
• Playing a crossword puzzle – single agent
• Chess playing – two agents
• Competitive multiagent environment
• Chess playing
• Cooperative multiagent environment
• Automated taxi driver
• Avoiding collision
Properties of task environments
• Deterministic vs. stochastic
• If the next state of the environment is completely determined by the
current state and the actions executed by the agent, the
environment is deterministic; otherwise, it is stochastic.
• The environment is uncertain if it is not fully observable or not
deterministic
• Outcomes are quantified in terms of probabilities
• Taxi driving is stochastic
• A vacuum cleaner may be deterministic or stochastic
Properties of task environments
• Episodic vs. sequential
• An episode = agent’s single pair of perception & action
• The quality of the agent’s action does not depend on
other episodes
• Every episode is independent of each other
• Episodic environment is simpler
• The agent does not need to think ahead
• Sequential
• Current action may affect all future decisions
• E.g., taxi driving and chess
Properties of task environments
• Static vs. dynamic
• A dynamic environment is always changing over
time
• E.g., the number of people in the street
• A static environment does not change while the agent deliberates
• E.g., the destination
• Semidynamic
• The environment itself does not change over time
• but the agent’s performance score does
• E.g., chess when played with a clock
Properties of task environments
• Discrete vs. continuous
• If there are a limited number of distinct, clearly
defined states, percepts and actions, the environment is
discrete
• E.g., a chess game is discrete; taxi driving is continuous
Properties of task environments
• Known vs. unknown
• This distinction refers not to the environment itself but to
the agent’s (or designer’s) state of knowledge about the
environment.
• In a known environment, the outcomes of all actions are given
(example: solitaire card games).
• If the environment is unknown, the agent will have to learn how it
works in order to make good decisions (example: a new video
game).
Properties of task environments
• Fully observable vs. partially observable
• Single agent vs. multiagent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
Examples of task environments
Characteristics of environments

Environment        | Fully observable? | Deterministic? | Episodic? | Static? | Discrete? | Single agent?
Solitaire          | No                | Yes            | Yes       | Yes     | Yes       | Yes
Backgammon         | Yes               | No             | No        | Yes     | Yes       | No
Taxi driving       | No                | No             | No        | No      | No        | No
Internet shopping  | No                | No             | No        | No      | Yes       | No
Medical diagnosis  | No                | No             | No        | No      | No        | Yes
→ Lots of real-world domains fall into the hardest case!
Structure of agents
• Agent = architecture + program
• Architecture = some sort of computing device (sensors
+ actuators)
• (Agent) Program = some function that implements the
agent mapping = “?”
• Agent Program = Job of AI
Agent programs
• Skeleton design of an agent program
Types of agent programs
• Table-driven agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Table-driven agents
• Table lookup of percept-action pairs mapping from every
possible perceived state to the optimal action for that state
• Problems
• Too big to generate and to store (chess has about 10^120
states, for example)
• No knowledge of non-perceptual parts of the current state
• Not adaptive to changes in the environment; requires entire
table to be updated if changes occur
• Looping: Can’t make actions conditional on previous
actions/states
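A table-driven agent can be sketched in a few lines; the table keys on the *entire* percept sequence, which is why it blows up so quickly. The vacuum-world entries below are hypothetical examples, and the `"NoOp"` fallback for missing entries is my own assumption:

```python
def make_table_driven_agent(table):
    """Return an agent that looks up the action for its full
    percept sequence in a fixed table."""
    percepts = []  # complete percept history so far

    def agent(percept):
        percepts.append(percept)
        # an unlisted sequence falls back to doing nothing
        return table.get(tuple(percepts), "NoOp")

    return agent

# Tiny illustrative table for the two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # → Suck
print(agent(("A", "Clean")))  # → Right
```

Even this toy world needs one entry per possible *history*, not per state, so the table grows exponentially with the agent's lifetime, which is the first problem listed above.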
(1) Simple reflex agents
• Rule-based reasoning to map from percepts to
optimal action; each rule handles a collection of
perceived states
• Problems
• Still usually too big to generate and to store
• Still no knowledge of non-perceptual parts of state
• Still not adaptive to changes in the environment;
requires collection of rules to be updated if changes
occur
A Simple Reflex Agent in Nature
percepts
(size, motion)
RULES:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
(3) Else (no moving object), NOOP
Action: SNAP, AVOID, or NOOP
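The frog's condition-action rules translate directly into code. This is a minimal sketch of the three rules above; the function name and argument encoding are my own:

```python
def frog_agent(size, moving):
    """Condition-action rules for the frog's feeding/escape reflexes."""
    if moving and size == "small":
        return "SNAP"             # rule 1: small moving object -> snap
    if moving and size == "large":
        return "AVOID"            # rule 2: large moving object -> avoid
                                  # (AVOID firing also inhibits SNAP)
    return "NOOP"                 # rule 3: nothing moving -> do nothing

print(frog_agent("small", True))   # → SNAP
print(frog_agent("large", True))   # → AVOID
```

The inhibition in rule 2 is captured simply by rule ordering: a large moving object never reaches the SNAP branch.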
Simple Vacuum Reflex Agent
function Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
(1) Simple reflex agent architecture
(2) Model-based reflex agents
• Encode “internal state” of the world to remember
the past as contained in earlier percepts.
• Requires two types of knowledge
• How the world evolves independently of the agent?
• How the agent’s actions affect the world?
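The two kinds of knowledge can be sketched for the two-square vacuum world. This is an illustrative design, not from the slides: the internal state tracks which squares are believed clean, the action model predicts that Suck cleans the current square, and the stopping behavior (NoOp when everything is believed clean) is my own addition:

```python
class ModelBasedVacuum:
    """A model-based reflex vacuum agent with internal state.

    Simplification: we assume squares stay clean once cleaned
    (i.e., the world does not evolve on its own)."""

    def __init__(self):
        self.clean = set()  # internal state: squares believed clean

    def __call__(self, location, status):
        if status == "Dirty":
            # action model: Suck will leave this square clean
            self.clean.add(location)
            return "Suck"
        self.clean.add(location)
        if self.clean >= {"A", "B"}:
            return "NoOp"  # internal state says the whole world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuum()
print(agent("A", "Dirty"))   # → Suck
print(agent("A", "Clean"))   # → Right
print(agent("B", "Clean"))   # → NoOp
```

Unlike the simple reflex version, this agent can stop working once its model says there is nothing left to do, because it remembers percepts it can no longer see.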
Model-based Reflex Agents
The agent has memory: internal state built from past percepts
(2)Model-based agent architecture
(3) Goal-based agents
• Choose actions so as to achieve a (given or computed)
goal.
• A goal is a description of a desirable situation.
• Keeping track of the current state is often not enough; we
need to add goals to decide which situations are good
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions
before deciding if goal is achieved – involves consideration
of the future, “what will happen if I do...?”
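"What will happen if I do...?" is exactly what a search over action sequences answers. The breadth-first sketch below is one simple way to do this deliberation; the 1-D corridor world and position encoding are hypothetical:

```python
from collections import deque

def plan_to_goal(start, goal, successors):
    """Breadth-first search for an action sequence reaching the goal.

    successors(state) yields (action, next_state) pairs, i.e. the
    agent's model of 'what happens if I do this action?'."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Hypothetical 1-D corridor: move Left/Right between positions 0..3.
def successors(pos):
    moves = []
    if pos > 0:
        moves.append(("Left", pos - 1))
    if pos < 3:
        moves.append(("Right", pos + 1))
    return moves

print(plan_to_goal(0, 3, successors))  # → ['Right', 'Right', 'Right']
```

The agent is deliberative: it simulates futures with its model before committing to the first action, rather than reacting to the current percept alone.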
Example: Tracking a Target
target
robot
• The robot must keep
the target in view
• The target’s trajectory
is not known in advance
• The robot may not know
all the obstacles in
advance
• Fast decision is required
(3) Architecture for goal-based agent
(4) Utility-based agents
• When there are multiple possible alternatives, how to
decide which one is best?
• A goal specifies only a crude distinction between happy and
unhappy states; we often need a more general performance
measure that describes a “degree of happiness.”
• Utility function U: States → Reals, indicating a measure of
success or happiness in a given state.
• Allows decisions comparing choice between conflicting
goals, and choice between likelihood of success and
importance of goal (if achievement is uncertain).
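Choosing under uncertainty then means maximizing *expected* utility. The sketch below shows the decision rule; the taxi scenario, probabilities, and utility values are made-up numbers for illustration:

```python
def choose_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility.

    outcomes(action) yields (probability, resulting_state) pairs;
    utility(state) maps a state to a real number."""
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(action))
    return max(actions, key=expected_utility)

# Hypothetical taxi choice: speeding is faster but risks a crash.
utility = {"arrive_fast": 8, "arrive_safe": 10, "crash": -100}.get

def outcomes(action):
    if action == "speed":
        return [(0.9, "arrive_fast"), (0.1, "crash")]
    return [(1.0, "arrive_safe")]

# EU(speed) = 0.9*8 + 0.1*(-100) = -2.8;  EU(cruise) = 10
print(choose_action(["speed", "cruise"], outcomes, utility))  # → cruise
```

This is how a utility-based agent trades off likelihood of success against importance of the goal: a small crash probability outweighs the time saved.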
(4) Architecture for a complete
utility-based agent
Learning Agents
• After an agent is programmed, can it work immediately?
• No, it still needs teaching
• In AI,
• Once an agent is built
• We teach it by giving it a set of training examples
• Test it with another set of examples
• We then say the agent learns
• A learning agent
Learning Agents
• Four conceptual components
• Learning element
• Making improvement
• Performance element
• Selecting external actions
• Critic
• Tells the learning element how well the agent is doing with respect to a fixed
performance standard
(feedback from the user or from examples: good or not?)
• Problem generator
• Suggest actions that will lead to new and informative experiences.
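The interaction between performance element, learning element, and critic can be sketched very simply. This is an illustrative design of my own, not a standard API: the critic's feedback arrives as a numeric reward, and the learning element just accumulates it per (percept, action) pair:

```python
class LearningVacuum:
    """Minimal learning agent: the performance element selects actions
    from learned values; the learning element updates those values
    from the critic's feedback."""

    def __init__(self):
        self.value = {}  # learned value of (percept, action) pairs

    def act(self, percept, actions):
        # performance element: select the best-valued external action
        return max(actions, key=lambda a: self.value.get((percept, a), 0))

    def learn(self, percept, action, reward):
        # learning element: improve using the critic's reward signal
        key = (percept, action)
        self.value[key] = self.value.get(key, 0) + reward

agent = LearningVacuum()
agent.learn(("A", "Dirty"), "Suck", 1)   # critic: that was good
print(agent.act(("A", "Dirty"), ["Right", "Suck"]))  # → Suck
```

A problem generator would occasionally override `act` to try an unvalued action, trading short-term performance for new and informative experience.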
Learning Agents
Summary: Agents
• An agent perceives and acts in an environment, has an architecture, and is
implemented by an agent program.
• Task environment – PEAS (Performance, Environment, Actuators, Sensors)
• An ideal agent always chooses the action which maximizes its expected
performance, given its percept sequence so far.
• An autonomous learning agent uses its own experience rather than built-in
knowledge of the environment by the designer.
• An agent program maps from percept to action and updates internal state.
• Reflex agents respond immediately to percepts.
• Goal-based agents act in order to achieve their goal(s).
• Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• The most challenging environments are not fully observable, nondeterministic,
dynamic, and continuous.

More Related Content

Similar to INTELLIGENT AGENTS.pptx

Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agentsmarada0033
 
W2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptxW2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptxJavaid Iqbal
 
Artificial Intelligence and Machine Learning.pptx
Artificial Intelligence and Machine Learning.pptxArtificial Intelligence and Machine Learning.pptx
Artificial Intelligence and Machine Learning.pptxMANIPRADEEPS1
 
Intelligent agents.ppt
Intelligent agents.pptIntelligent agents.ppt
Intelligent agents.pptShilpaBhatia32
 
1.1 What are Agent and Environment.pptx
1.1 What are Agent and Environment.pptx1.1 What are Agent and Environment.pptx
1.1 What are Agent and Environment.pptxSuvamvlogs
 
Intelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceIntelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceKuppusamy P
 
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem Solving
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem SolvingCS 3491 Artificial Intelligence and Machine Learning Unit I Problem Solving
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem SolvingBalamuruganV28
 
AI-Lec2-Agents.pptx
AI-Lec2-Agents.pptxAI-Lec2-Agents.pptx
AI-Lec2-Agents.pptxHirazNor
 
Lecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxLecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxbk996051
 

Similar to INTELLIGENT AGENTS.pptx (20)

Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agents
 
Lec 2 agents
Lec 2 agentsLec 2 agents
Lec 2 agents
 
Agents1
Agents1Agents1
Agents1
 
AI Basic.pptx
AI Basic.pptxAI Basic.pptx
AI Basic.pptx
 
W2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptxW2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptx
 
Artificial Intelligence and Machine Learning.pptx
Artificial Intelligence and Machine Learning.pptxArtificial Intelligence and Machine Learning.pptx
Artificial Intelligence and Machine Learning.pptx
 
M2 agents
M2 agentsM2 agents
M2 agents
 
Lecture 2 Agents.pptx
Lecture 2 Agents.pptxLecture 2 Agents.pptx
Lecture 2 Agents.pptx
 
Intelligent Agents
Intelligent Agents Intelligent Agents
Intelligent Agents
 
agents.pdf
agents.pdfagents.pdf
agents.pdf
 
Intelligent agents.ppt
Intelligent agents.pptIntelligent agents.ppt
Intelligent agents.ppt
 
Agents_AI.ppt
Agents_AI.pptAgents_AI.ppt
Agents_AI.ppt
 
1.1 What are Agent and Environment.pptx
1.1 What are Agent and Environment.pptx1.1 What are Agent and Environment.pptx
1.1 What are Agent and Environment.pptx
 
Intelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial IntelligenceIntelligent (Knowledge Based) agent in Artificial Intelligence
Intelligent (Knowledge Based) agent in Artificial Intelligence
 
AI week 2.pdf
AI week 2.pdfAI week 2.pdf
AI week 2.pdf
 
Lecture 2
Lecture 2Lecture 2
Lecture 2
 
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem Solving
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem SolvingCS 3491 Artificial Intelligence and Machine Learning Unit I Problem Solving
CS 3491 Artificial Intelligence and Machine Learning Unit I Problem Solving
 
AI-Lec2-Agents.pptx
AI-Lec2-Agents.pptxAI-Lec2-Agents.pptx
AI-Lec2-Agents.pptx
 
Unit 1.ppt
Unit 1.pptUnit 1.ppt
Unit 1.ppt
 
Lecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxLecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptx
 

More from vipulkondekar

Unit 3 Data Quality and Preprocessing .pptx
Unit 3 Data Quality and Preprocessing .pptxUnit 3 Data Quality and Preprocessing .pptx
Unit 3 Data Quality and Preprocessing .pptxvipulkondekar
 
Unit 1 Introduction to Data Analytics .pptx
Unit 1 Introduction to Data Analytics .pptxUnit 1 Introduction to Data Analytics .pptx
Unit 1 Introduction to Data Analytics .pptxvipulkondekar
 
C Introduction and bascis of high level programming
C Introduction and bascis of high level programmingC Introduction and bascis of high level programming
C Introduction and bascis of high level programmingvipulkondekar
 
Analyzing patterns and statistics in data.pptx
Analyzing patterns and statistics in data.pptxAnalyzing patterns and statistics in data.pptx
Analyzing patterns and statistics in data.pptxvipulkondekar
 
Technology & business transformation and Career in UK.pptx
Technology & business transformation and Career in UK.pptxTechnology & business transformation and Career in UK.pptx
Technology & business transformation and Career in UK.pptxvipulkondekar
 
Machine Learning Introduction introducing basics of Machine Learning
Machine Learning Introduction introducing basics of Machine LearningMachine Learning Introduction introducing basics of Machine Learning
Machine Learning Introduction introducing basics of Machine Learningvipulkondekar
 
Min Max Artificial Intelligence algorithm
Min Max Artificial Intelligence algorithmMin Max Artificial Intelligence algorithm
Min Max Artificial Intelligence algorithmvipulkondekar
 
Cyclic Redundancy check approach for Error Detection
Cyclic Redundancy check approach for Error DetectionCyclic Redundancy check approach for Error Detection
Cyclic Redundancy check approach for Error Detectionvipulkondekar
 
Embedded System serial Communication.ppt
Embedded System serial Communication.pptEmbedded System serial Communication.ppt
Embedded System serial Communication.pptvipulkondekar
 
properties of the task environment in artificial intelligence system
properties of the task environment in artificial intelligence systemproperties of the task environment in artificial intelligence system
properties of the task environment in artificial intelligence systemvipulkondekar
 

More from vipulkondekar (12)

Unit 3 Data Quality and Preprocessing .pptx
Unit 3 Data Quality and Preprocessing .pptxUnit 3 Data Quality and Preprocessing .pptx
Unit 3 Data Quality and Preprocessing .pptx
 
Unit 1 Introduction to Data Analytics .pptx
Unit 1 Introduction to Data Analytics .pptxUnit 1 Introduction to Data Analytics .pptx
Unit 1 Introduction to Data Analytics .pptx
 
C Introduction and bascis of high level programming
C Introduction and bascis of high level programmingC Introduction and bascis of high level programming
C Introduction and bascis of high level programming
 
Analyzing patterns and statistics in data.pptx
Analyzing patterns and statistics in data.pptxAnalyzing patterns and statistics in data.pptx
Analyzing patterns and statistics in data.pptx
 
Technology & business transformation and Career in UK.pptx
Technology & business transformation and Career in UK.pptxTechnology & business transformation and Career in UK.pptx
Technology & business transformation and Career in UK.pptx
 
Machine Learning Introduction introducing basics of Machine Learning
Machine Learning Introduction introducing basics of Machine LearningMachine Learning Introduction introducing basics of Machine Learning
Machine Learning Introduction introducing basics of Machine Learning
 
Min Max Artificial Intelligence algorithm
Min Max Artificial Intelligence algorithmMin Max Artificial Intelligence algorithm
Min Max Artificial Intelligence algorithm
 
Cyclic Redundancy check approach for Error Detection
Cyclic Redundancy check approach for Error DetectionCyclic Redundancy check approach for Error Detection
Cyclic Redundancy check approach for Error Detection
 
Embedded System serial Communication.ppt
Embedded System serial Communication.pptEmbedded System serial Communication.ppt
Embedded System serial Communication.ppt
 
properties of the task environment in artificial intelligence system
properties of the task environment in artificial intelligence systemproperties of the task environment in artificial intelligence system
properties of the task environment in artificial intelligence system
 
AI 1.pptx
AI 1.pptxAI 1.pptx
AI 1.pptx
 
DC ISE QP E&TC.doc
DC ISE QP E&TC.docDC ISE QP E&TC.doc
DC ISE QP E&TC.doc
 

Recently uploaded

Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2RajaP95
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .Satyam Kumar
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 

Recently uploaded (20)

Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
an action
• Practically, it is described by
• An agent program
• The real implementation
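The distinction between the mathematical agent function and the practical agent program can be sketched as follows. This is an illustrative toy (the rule and class names are not from the slides): the function maps an entire percept sequence to an action, while the program receives one percept at a time and maintains the history itself.

```python
# Hypothetical sketch: the agent *function* maps a full percept sequence to
# an action; the agent *program* is the implementation that runs on the
# architecture, seeing one percept per step.

def agent_function(percept_sequence):
    """Mathematical view: whole history in, one action out."""
    # Toy rule: act on the most recent percept only.
    return "Suck" if percept_sequence[-1] == "Dirty" else "NoOp"

class AgentProgram:
    """Practical view: receives one percept per step, keeps the history."""
    def __init__(self):
        self.history = []

    def step(self, percept):
        self.history.append(percept)
        return agent_function(self.history)
```

Run step by step, the program reproduces exactly what the agent function would return for the accumulated percept sequence.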
Vacuum-cleaner world
• Percepts: which square the agent is in, and whether it is Clean or Dirty
• Actions: Move left, Move right, Suck, do nothing
Program implements the agent function

function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
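The pseudocode above translates almost line for line into a runnable sketch:

```python
def reflex_vacuum_agent(location, status):
    """Direct translation of the Reflex-Vacuum-Agent pseudocode:
    suck if the current square is dirty, otherwise move to the other square."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```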
Agents
• Have sensors, actuators, goals
• Agent program
• Implements a mapping from percept sequences to actions
• Performance measure to evaluate agents
• An autonomous agent decides autonomously which action to take in the current situation to maximize progress towards its goals.
Behavior and performance of agents in terms of the agent function
• Perception (sequence) to action mapping
• Ideal mapping: specifies which actions an agent ought to take at any point in time
• Description: look-up table
• Performance measure: a measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.)
• (Degree of) autonomy: to what extent is the agent able to make decisions and take actions on its own?
Performance measure
• A general rule:
• Design performance measures according to what one actually wants in the environment, rather than how one thinks the agent should behave
• E.g., in the vacuum-cleaner world
• We want the floor clean, no matter how the agent behaves
• We don’t restrict how the agent behaves
Agents
• Fundamental faculties of intelligence
• Acting
• Sensing
• Understanding, reasoning and learning
• In order to act you must sense.
• Robotics: sensing and acting; understanding is not necessary
Intelligent Agents
• Must sense
• Must act
• Must be autonomous
• Must be rational
Rational Agent
• AI is about building rational agents
• A rational agent always does the right thing.
• What are the functionalities?
• What are the components?
• How do we build them?
How is an Agent different from other software?
• Agents are autonomous, that is, they act on behalf of the user
• Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment
• Agents don’t only act reactively, but sometimes also proactively
How is an Agent different from other software?
• Agents have social ability, that is, they communicate with the user, the system, and other agents as required
• Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle
• Agents may migrate from one system to another to access remote resources or even to meet other agents
Rationality
• What is rational at any given time depends on four things:
• The performance measure defining the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence up to now
Rational agent
• For each possible percept sequence, a rational agent should select an action expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
• E.g., an exam: maximize marks, based on the questions on the paper & your knowledge
Example of a rational agent
• Performance measure
• Awards one point for each clean square, at each time step, over a lifetime of 10000 time steps
• Prior knowledge about the environment
• The geography of the environment
• Only two squares
• The effect of the actions
Example of a rational agent
• Actions that the agent can perform
• Left, Right, Suck and NoOp
• Percept sequence
• Where is the agent?
• Does the location contain dirt?
• Under these circumstances, the agent is rational.
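The performance measure above (one point per clean square per time step) can be checked with a small simulation. This is a toy sketch of the two-square world, not code from the slides; the agent and scoring loop are illustrative.

```python
# Toy scoring run for the two-square vacuum world: at every time step the
# agent earns one point for each square that is currently clean.

def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, location, steps):
    """world: dict like {'A': 'Dirty', 'B': 'Clean'}; returns the total score."""
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for s in world.values() if s == "Clean")
    return score
```

Starting with both squares dirty, the agent cleans its square, moves, cleans the other, and then keeps collecting two points per step.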
Omniscience
• An omniscient agent
• Knows the actual outcome of its actions in advance
• No other possible outcomes
• However, impossible in the real world
Omniscience
• Based on the circumstances, an agent can still be rational.
• Rationality maximizes expected performance
• Perfection maximizes actual performance
• Hence rational agents are not omniscient.
Learning
• Does a rational agent depend only on the current percept?
• No, the past percept sequence should also be used
• This is called learning
• After experiencing an episode, the agent should adjust its behavior to perform better at the same job next time.
Autonomy
• If an agent just relies on the prior knowledge of its designer rather than its own percepts, then the agent lacks autonomy.
• A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
Nature of Environments
• Task environments are the problems, while rational agents are the solutions
• Specify the task environment through PEAS
• In designing an agent, the first step must always be to specify the task environment as fully as possible.
• E.g., automated taxi driver
Task environments
• Performance measure
• How can we judge the automated driver? Which factors are considered?
• Getting to the correct destination
• Minimizing fuel consumption
• Minimizing the trip time and/or cost
• Minimizing violations of traffic laws
• Maximizing safety and comfort, etc.
Task environments
• Environment
• A taxi must deal with a variety of roads
• Traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
• Interaction with the customer
Task environments
• Actuators (for outputs)
• Control over the accelerator, steering, gear shifting and braking
• A display to communicate with the customers
• Sensors (for inputs)
• Detect other vehicles, road situations
• GPS (Global Positioning System)
• Odometer, engine sensors, etc.
Properties of task environments
• Fully observable vs. partially observable
• If an agent’s sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable.
• An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
• Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world.
Properties of task environments
• Single agent vs. multiagent
• Playing a crossword puzzle: single agent
• Chess playing: two agents
• Competitive multiagent environment: chess playing
• Cooperative multiagent environment: automated taxi drivers avoiding collisions
Properties of task environments
• Deterministic vs. stochastic
• If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is deterministic; otherwise, it is stochastic.
• An environment is uncertain if it is not fully observable or not deterministic.
• Outcomes are quantified in terms of probability.
• Taxi driving is stochastic; a vacuum cleaner may be deterministic or stochastic.
Properties of task environments
• Episodic vs. sequential
• An episode = the agent’s single pair of perception & action
• The quality of the agent’s action does not depend on other episodes; every episode is independent of the others
• An episodic environment is simpler: the agent does not need to think ahead
• Sequential: the current action may affect all future decisions
• E.g., taxi driving and chess
Properties of task environments
• Static vs. dynamic
• A dynamic environment is always changing over time
• E.g., the number of people in the street
• A static environment does not change
• E.g., the destination
• Semidynamic
• The environment does not change over time, but the agent’s performance score does
• E.g., chess played with a clock
Properties of task environments
• Discrete vs. continuous
• If there are a limited number of distinct states, and clearly defined percepts and actions, the environment is discrete
• E.g., a chess game is discrete; taxi driving is continuous
Properties of task environments
• Known vs. unknown
• This distinction refers not to the environment itself but to the agent’s (or designer’s) state of knowledge about the environment.
• In a known environment, the outcomes for all actions are given (example: solitaire card games).
• If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).
Properties of task environments (summary)
• Fully observable vs. partially observable
• Single agent vs. multiagent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
Examples of task environments
Characteristics of environments

                    Fully observable?  Deterministic?  Episodic?  Static?  Discrete?  Single agent?
Solitaire           No                 Yes             Yes        Yes      Yes        Yes
Backgammon          Yes                No              No         Yes      Yes        No
Taxi driving        No                 No              No         No       No         No
Internet shopping   No                 No              No         No       Yes       No
Medical diagnosis   No                 No              No         No       No        Yes

→ Lots of real-world domains fall into the hardest case!
Structure of agents
• Agent = architecture + program
• Architecture = some sort of computing device (sensors + actuators)
• (Agent) program = some function that implements the agent mapping = “?”
• Agent program = job of AI
Agent programs
• Skeleton design of an agent program
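A minimal sketch of the classic agent-program skeleton, assuming the usual shape (persistent memory, updated from each percept, then used to choose an action). The helper names here are illustrative, not from the slides.

```python
# Skeleton agent program: keep a memory, fold each percept into it,
# choose an action from the memory, and remember the action taken.

class SkeletonAgent:
    def __init__(self, choose_action):
        self.memory = []
        self.choose_action = choose_action   # decision strategy supplied by the designer

    def __call__(self, percept):
        self.memory.append(percept)          # update memory with the percept
        action = self.choose_action(self.memory)
        self.memory.append(("did", action))  # also remember the action taken
        return action
```

Any concrete agent (reflex, goal-based, utility-based) can be seen as a particular choice of the `choose_action` strategy.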
Types of agent programs
• Table-driven agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Table-driven agents
• Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state
• Problems
• Too big to generate and to store (chess has about 10^120 states, for example)
• No knowledge of non-perceptual parts of the current state
• Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
• Looping: can’t make actions conditional on previous actions/states
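The table-driven idea can be sketched in a few lines; the table entries below are made-up vacuum-world examples. Note that the table is indexed by the *entire* percept sequence, which is exactly why it explodes combinatorially.

```python
# Illustrative table-driven agent: the lookup key is the whole percept
# sequence seen so far, so the table grows with every possible history.

class TableDrivenAgent:
    def __init__(self, table):
        self.percepts = []
        self.table = table   # maps percept-sequence tuples to actions

    def step(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Toy table for the two-square vacuum world (histories of length 1 and 2).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
```

Even for this two-square world, covering lifetimes of T steps needs entries for every history up to length T; the approach is only viable for tiny toy problems.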
(1) Simple reflex agents
• Rule-based reasoning to map from percepts to optimal action; each rule handles a collection of perceived states
• Problems
• Still usually too big to generate and to store
• Still no knowledge of non-perceptual parts of state
• Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur
A Simple Reflex Agent in Nature
• Percepts: size, motion
• Rules:
• (1) If small moving object, then activate SNAP
• (2) If large moving object, then activate AVOID and inhibit SNAP
• Else (not moving): NOOP
• Action: SNAP or AVOID or NOOP
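The condition-action rules above translate directly into code; this is a toy rendering of the frog-like agent from the slide:

```python
def frog_agent(size, moving):
    """Condition-action rules: snap at small moving objects, avoid large
    moving objects (which inhibits SNAP), do nothing for still objects."""
    if moving and size == "small":
        return "SNAP"
    if moving and size == "large":
        return "AVOID"
    return "NOOP"
```

Note the rule ordering encodes the inhibition: a large moving object never triggers SNAP.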
Simple Vacuum Reflex Agent

function Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
(1) Simple reflex agent architecture
(2) Model-based reflex agents
• Encode the “internal state” of the world to remember the past as contained in earlier percepts.
• Requires two types of knowledge
• How does the world evolve independently of the agent?
• How do the agent’s actions affect the world?
Model-based Reflex Agents
• An agent with memory
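A hedged sketch of a model-based reflex vacuum agent: it keeps an internal state, updates it using both the percept and a model of how its own actions change the world, and can stop once the model says everything is clean (something a simple reflex agent cannot do). The class and rules are illustrative, not from the slides.

```python
# Model-based reflex agent for the two-square vacuum world: the internal
# state remembers the believed status of both squares, so the agent can
# return NoOp once its model says both are clean.

class ModelBasedVacuum:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}  # believed world state

    def step(self, location, status):
        self.state[location] = status          # fold the percept into the model
        if status == "Dirty":
            self.state[location] = "Clean"     # model: Suck makes the square clean
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.state[other] == "Clean":
            return "NoOp"                      # model says the job is done
        return "Right" if location == "A" else "Left"
```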
(3) Goal-based agents
• Choose actions so as to achieve a (given or computed) goal.
• A goal is a description of a desirable situation.
• Keeping track of the current state is often not enough; we need to add goals to decide which situations are good.
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions before deciding whether the goal is achieved; involves consideration of the future, “what will happen if I do...?”
Example: Tracking a Target
• The robot must keep the target in view
• The target’s trajectory is not known in advance
• The robot may not know all the obstacles in advance
• Fast decisions are required
(3) Architecture for goal-based agent
(4) Utility-based agents
• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes the “degree of happiness.”
• Utility function U: State → Reals, indicating a measure of success or happiness in a given state.
• Allows decisions comparing choices between conflicting goals, and between likelihood of success and importance of a goal (if achievement is uncertain).
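The utility-based choice can be sketched as follows. The utility function and the predicted outcomes below are made up for illustration; the point is only the mechanism: score each predicted successor state with U and pick the action with the highest score.

```python
# Illustrative utility-based decision: among candidate actions, pick the
# one whose predicted outcome state has the highest utility.

def utility(state):
    # Toy "degree of happiness": clean squares are good, energy spent is bad.
    return 2 * state["clean"] - state["energy"]

def best_action(outcomes):
    """outcomes: dict mapping each action to its predicted successor state."""
    return max(outcomes, key=lambda a: utility(outcomes[a]))

outcomes = {
    "Suck": {"clean": 2, "energy": 1},   # one more clean square, costs energy
    "NoOp": {"clean": 1, "energy": 0},   # no change, no cost
}
```

With uncertain outcomes, the same pattern extends to maximizing *expected* utility by weighting each possible successor by its probability.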
(4) Architecture for a complete utility-based agent
Learning Agents
• After an agent is programmed, can it work immediately?
• No, it still needs teaching
• In AI, once an agent is done, we teach it by giving it a set of examples, and test it using another set of examples
• We then say the agent learns: a learning agent
Learning Agents
• Four conceptual components
• Learning element: making improvements
• Performance element: selecting external actions
• Critic: tells the learning element how well the agent is doing with respect to a fixed performance standard (feedback from the user or examples, good or not?)
• Problem generator: suggests actions that will lead to new and informative experiences
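The four components can be sketched as a toy class. Everything here is illustrative (the threshold parameter, the scoring rule, the 0.1 adjustment): the point is only how the pieces interact, with the performance element acting, the critic scoring against a fixed standard, and the learning element adjusting behavior.

```python
import random

class LearningAgent:
    """Toy learning agent: a single behaviour parameter (threshold) is
    adjusted by the learning element whenever the critic reports failure."""

    def __init__(self):
        self.threshold = 0.0               # behaviour parameter being learned

    def performance_element(self, percept):
        return "Act" if percept > self.threshold else "Wait"

    def critic(self, percept, action):
        # Fixed performance standard: acting is correct only for percepts > 0.5.
        return 1 if (action == "Act") == (percept > 0.5) else -1

    def learning_element(self, percept, action, feedback):
        if feedback < 0:                   # adjust behaviour after a bad episode
            self.threshold += 0.1 if action == "Act" else -0.1

    def problem_generator(self):
        return random.uniform(0, 1)        # propose a new, informative experience
```

Repeatedly acting on a low percept draws negative feedback, and the threshold rises until the agent stops making that mistake.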
Summary: Agents
• An agent perceives and acts in an environment, has an architecture, and is implemented by an agent program.
• Task environment: PEAS (Performance, Environment, Actuators, Sensors)
• An ideal agent always chooses the action that maximizes its expected performance, given its percept sequence so far.
• An autonomous learning agent uses its own experience rather than the built-in knowledge of the environment provided by the designer.
• An agent program maps from percepts to actions and updates its internal state.
• Reflex agents respond immediately to percepts.
• Goal-based agents act in order to achieve their goal(s).
• Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• The most challenging environments are not fully observable, nondeterministic, dynamic, and continuous.