Intelligent Agents
Why the Rational-Agent Approach?
Because developing a set of design principles for building successful agents, systems that can reasonably be called intelligent, is central to artificial intelligence.
● An agent perceives its environment through sensors and acts upon that environment through actuators, mechanisms that put something into action.
● A software agent receives keystrokes, file contents, and network packets as input, and it acts on the environment by displaying output on the screen or sending network packets.
● The word ‘percept’ refers to the inputs an agent receives through its sensors at any instant. For example, a human agent has sense organs for perceiving percepts, and hands, legs, a vocal tract, and so on as actuators. Similarly, a robotic agent might have cameras and infrared range finders for sensing the environment and various motors as actuators.
● An agent’s percept sequence is the complete history of everything the agent has ever perceived; it is this percept sequence that the agent uses to choose an action at any given time.
● An agent’s behavior is described by its agent function, which maps any given percept sequence to an action; the agent function is an abstract mathematical description. By tabulating all possible percept sequences and their corresponding actions, we can externally characterize the agent’s behavior.
● An agent program is an internal characterization of an agent: a concrete implementation running on a physical system.
● For example, the agent function for a cleaning robot is to clean a room: it perceives its environment through sensors, determines the dirtiness level, and takes actions to clean the room efficiently. The corresponding agent program might be a piece of software running on the robot’s control system.
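The distinction between an abstract agent function and a concrete agent program can be made vivid with the textbook’s tiny two-square vacuum world. The sketch below is illustrative: the square names, percepts, and actions are the conventional ones, but any resemblance to a real robot’s software is an assumption.

```python
# A tabulated agent function for a two-square vacuum world.
# Percepts are (location, status) pairs. For this particular agent only
# the most recent percept matters, so the full percept-sequence table
# collapses to a table over single percepts.
AGENT_FUNCTION = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def vacuum_agent_program(percept):
    """Agent program: a concrete implementation of the agent function."""
    return AGENT_FUNCTION[percept]
```

Calling `vacuum_agent_program(("A", "Dirty"))` yields `"Suck"`: the external table (agent function) and the running code (agent program) describe the same behavior at two different levels.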
● A randomized action may look very silly, but it can in fact be intelligent in some environments. If we find that an agent uses randomization to choose its actions, then we would have to identify the probability of each action by observing the agent trying each action many times.
● Before we close this section, note that the notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization. Even a hand-held calculator can be viewed as an agent: when it displays “2+2=4”, the result “4” is its action in response to the percept sequence it has received. Such an analysis is valid, but it is a trivial one.
________
2. Good Behavior: The Concept of Rationality
● A rational agent is one that does the right thing. To judge whether the task has been done rightly, we look at the sequence of actions, which causes the environment to go through a sequence of states. If that sequence of states is desirable, the agent has behaved well; this notion of desirability is captured by a performance measure used to evaluate the rationality of an agent.
● We evaluate environment states, not the agent’s own states. If we evaluated the agent’s states, the agent could achieve “rationality” by deluding itself that its performance was perfect. So, as a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
2.1 Rationality
What is rational at any given time depends on:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
● So, for each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent has.
2.2 Omniscience, Learning, and
Autonomy
● An omniscient agent knows the actual outcome of its actions and can act accordingly; but in reality, omniscience is impossible.
● Rationality maximizes expected performance, while perfection maximizes actual performance.
● The point is that we cannot expect an agent to do what turns out to be the best action after the fact; unless we could give the agent perfect foresight, it is impossible to design an agent that fulfills such a specification.
● Doing actions in order to modify
future percepts, sometimes called
information gathering, is an
important part of rationality.
● Exploration of an environment is
another way for information
gathering.
● Some agents may start with prior knowledge, but they must adapt and modify their understanding based on experience. So a rational agent shouldn’t just gather information but should also learn as much as possible from what it perceives.
● If an agent relies on the prior knowledge of its designer rather than on its own
percepts, we can say that the agent lacks autonomy.
● An autonomous agent learns to compensate for its partial or incorrect prior
knowledge. So, a rational agent should be autonomous.
● After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge. This independence is what makes a rational agent autonomous.
_________
3. The Nature of Environments
● The nature of an agent’s environment is described by specifying the task environment and by identifying the properties of that task environment.
3.1 Specifying the task environment
● To specify the task environment of an agent, we specify PEAS: the Performance measure, the Environment, the Actuators, and the Sensors.
● Here is an example: the task environment of a self-driving car (an automated taxi).
● Performance measures for the above example include getting to the correct
destination; minimizing fuel consumption and wear and tear; minimizing the trip
time or cost; minimizing violations of traffic laws and disturbances to other
drivers; maximizing safety and passenger comfort; and maximizing profits.
● The environment includes a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The car may also need to interact with potential and actual passengers.
● The actuators for an automated taxi include those available to a human driver:
control over the engine through the accelerator, control over steering and braking,
and a display screen or voice synthesizer to communicate to the passengers.
● The basic sensors for the taxi will include one or more controllable video cameras so that it can see the road; a speedometer to control the vehicle properly, especially on curves; a global positioning system (GPS) so that it doesn’t get lost; and a keyboard or microphone for the passenger to request a destination.
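One convenient way to write down a PEAS specification is as a simple record; the sketch below is just one plausible organization (the `PEAS` class name is an assumption), with field contents taken from the taxi description above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: Performance measure,
    Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated-taxi example from the text, as data.
taxi = PEAS(
    performance=["reach correct destination", "minimize fuel and wear",
                 "minimize trip time or cost", "minimize law violations",
                 "maximize safety and comfort", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "stray animals",
                 "road works", "police cars", "puddles", "potholes", "passengers"],
    actuators=["accelerator", "steering", "brake",
               "display screen", "voice synthesizer"],
    sensors=["video cameras", "speedometer", "GPS", "keyboard", "microphone"],
)
```

Writing the four PEAS components as explicit fields makes it easy to compare task environments side by side, which is exactly what the specification is for.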
● Softbot (software agent): a “softbot” is a software agent or robot designed to operate on the Internet.
- Example: a softbot website operator designed to scan Internet news sources and show interesting items to its users, while selling advertising space to generate revenue.
- The characteristics attributed to this softbot include the need for natural-language processing abilities, the ability to learn user and advertiser preferences, and the capacity to dynamically adjust its plans in response to changes, such as the connection status of news sources.
- A softbot in this context is thus an intelligent software agent tailored for web operations, dealing with the complexity of the online environment and interacting with both artificial and human agents.
3.2 Properties of task environments
● Fully observable vs. partially observable: in a fully observable environment, an agent’s sensors give it access to the complete state of the environment at each point in time. An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. If the agent has no sensors at all, the environment is unobservable.
● Single-agent vs. multiagent: an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
● Chess is a competitive multiagent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.
● Deterministic vs. stochastic: If the next state of the environment is completely
determined by the current state and the action executed by the agent, then we say the
environment is deterministic; otherwise, it is stochastic. "Stochastic" refers to a process
or phenomenon that involves a random element or probability.
● We say an environment is uncertain if it is not fully observable or not deterministic.
● A non-deterministic environment is one in which actions are characterized by their
possible outcomes, but no probabilities are attached to them. Nondeterministic
environment descriptions are usually associated with performance measures that
require the agent to succeed for all possible outcomes of its actions.
● Episodic vs. sequential: in an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode, the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn’t affect whether the next part is defective.
● In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
● Static vs. dynamic: if the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Crossword puzzles are static.
● If the environment itself does not change with the passage of time but the agent’s
performance score does, then we say the environment is semi-dynamic. Chess,
when played with a clock, is semi-dynamic.
● Known vs. unknown: the agent’s state of knowledge about the environment determines whether it is a known or unknown environment. In a known environment, the outcomes (or outcome probabilities, if the environment is stochastic) of all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.
● The hardest cases are partially observable, multiagent, stochastic, sequential,
dynamic, continuous, and unknown.
Examples of task environments and their characteristics were given in a table in the original slides (not reproduced here).
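Part of that table can be reconstructed from the examples discussed in the text (crossword puzzles, chess with a clock, taxi driving); the sketch below encodes those standard classifications as data and is meant as an illustration, not an exhaustive list.

```python
# Property tuple: (observability, agents, determinism, episodicity,
#                  dynamism, discreteness)
ENVIRONMENTS = {
    "crossword puzzle":
        ("fully", "single", "deterministic", "sequential", "static", "discrete"),
    "chess with a clock":
        ("fully", "multi", "deterministic", "sequential", "semi-dynamic", "discrete"),
    "taxi driving":
        ("partially", "multi", "stochastic", "sequential", "dynamic", "continuous"),
}

def is_hardest_case(props):
    """Rough check for the 'hardest case' described above: partially
    observable, multiagent, stochastic, and dynamic."""
    return (props[0] == "partially" and props[1] == "multi"
            and "stochastic" in props and "dynamic" in props)
```

As expected, `is_hardest_case` is true for taxi driving and false for the crossword puzzle.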
● One or more agents are placed in a simulated environment, their behavior is observed over time, and they are evaluated according to a given performance measure. Such experiments are often carried out not for a single environment but for many environments drawn from an environment class.
● For example, to evaluate a taxi driver in simulated traffic, we would want to run many simulations with different traffic, lighting, and weather conditions.
● For each environment class, an environment generator selects particular environments (with certain likelihoods) in which to run the agent.
_______________________
How do the components of an agent program work?
● The representations used by an agent program can be placed along an axis of increasing complexity and increasing expressiveness: atomic, factored, and structured.
● In an atomic representation, each state of the world is indivisible; it has no internal structure. For example, when driving from one end of a country to the other, you can break the route down by the names of the cities you will pass through: each city is a single, indivisible state. The algorithms underlying search, game playing, hidden Markov models, and Markov decision processes work on atomic representations.
● In a factored representation, we need to consider more than just an atomic city. For example, we may need to track the amount of gas in the tank, our current GPS coordinates, and whether or not the oil warning light is working. A factored representation breaks each state down into attributes or variables, each of which can take a value.
● The areas of AI based on factored representations include constraint satisfaction algorithms, propositional logic, planning, Bayesian networks, and machine learning algorithms.
● Sometimes we need things related to each other, not just variables and values. For example, while driving from one end of a country to the other, there might be a situation in which a cow at a turn stops the car from moving ahead. Describing that situation requires objects (the car, the cow) and the relationships between them.
● Structured representations underlie relational databases, first-order logic, first-order probability models, knowledge-based reasoning, and natural language understanding.
● In fact, almost everything that humans care about can be expressed in terms of objects and their relationships.
● As we move from atomic to structured representations, the expressiveness of the representation increases.
● To gain the benefits of expressive representations while avoiding their drawbacks, an intelligent system for the real world may need to operate at all points along this axis simultaneously.
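The three representations above can be contrasted with a small driving example; the variable names and values below are purely illustrative assumptions.

```python
# Atomic: a state is a single indivisible label, e.g. the current city.
atomic_state = "Bucharest"

# Factored: a state is broken into attributes (variables), each with a value.
factored_state = {
    "city": "Bucharest",
    "fuel_level": 0.4,            # fraction of a full tank
    "gps": (44.43, 26.10),        # current coordinates
    "oil_warning_light": False,
}

# Structured: a state contains objects and relations between them,
# e.g. "the cow blocks the car" from the example above.
structured_state = {
    "objects": ["car", "cow"],
    "relations": [("blocks", "cow", "car")],
}
```

Each step along the axis keeps the information of the previous one but can express more: the factored state still names the city, and the structured state can still carry attributes, plus relationships.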
_______________________
4. The Structure of Agents
● The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators, which we call the architecture:

agent = architecture + program
● Note the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history.
● Four basic kinds of agent programs:
○ Simple reflex agents
○ Model-based reflex agents
○ Goal-based agents
○ Utility-based agents.
● The simplest kind of agent is the simple reflex agent. This agent selects actions on the basis of the current percept, ignoring the rest of the percept history. For example, if a self-driving car detects an object in front of it, it decides an action based on that current situation alone: it initiates braking. The percept triggers an established connection in the agent program to the action “initiate braking”; we call such a connection a condition-action rule.
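The braking example can be sketched as a tiny simple reflex agent; the percept field name and the fallback action are assumptions for illustration.

```python
def simple_reflex_agent(percept):
    """Selects an action from the current percept alone, using
    condition-action rules; the percept history is ignored."""
    if percept.get("object_in_front"):
        return "initiate-braking"   # condition-action rule from the text
    return "drive-forward"          # assumed default action
```

Because the function has no memory, calling it twice with the same percept always yields the same action, which is exactly the limitation the next bullets discuss.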
● Simple reflex agents have the admirable property of simplicity, but they turn out to be of limited intelligence. When they can observe the environment only partially, they can get stuck in infinite loops of actions; in that case, the agent can randomize its actions to escape the loop.
● The best way to handle partial observability for the agent is to keep track of the part
of the world it cannot see now. That is, the agent should maintain some sort of
internal state that depends on the percept history and thereby reflects at least
some of the unobserved aspects of the current state.
● The agent’s knowledge about the parts of the world it cannot currently see, about how the world evolves independently of the agent, and about how the agent’s actions affect the world, whether implemented in a simple Boolean circuit or in a complete scientific theory, is called a model of the world. An agent that uses such a model is called a model-based agent.
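A model-based reflex agent maintains internal state updated from the percept history. The skeleton below follows the usual pseudocode shape for such agents; the trivial “merge the percept into the state” model and the rule used in the example are assumptions.

```python
class ModelBasedReflexAgent:
    """Keeps internal state that depends on the percept history, so it
    reflects at least some unobserved aspects of the current state."""

    def __init__(self, update_state, rules):
        self.state = {}
        self.update_state = update_state  # the model of the world
        self.rules = rules                # list of (condition, action) pairs
        self.last_action = None

    def __call__(self, percept):
        # Use the model, the last action, and the new percept to
        # revise the internal state, then apply condition-action rules.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "no-op"
        return "no-op"

# Example usage with a trivial model that merges each percept into the state.
def merge_model(state, last_action, percept):
    return {**state, **percept}

braking_agent = ModelBasedReflexAgent(
    merge_model,
    rules=[(lambda s: s.get("object_in_front"), "initiate-braking")],
)
```

Unlike the simple reflex agent, this agent carries `self.state` across calls, so information from earlier percepts can influence later decisions.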
● Knowing something about the current state of the environment is not always
enough to decide what to do. For example, at a road junction, the taxi can turn left,
turn right, or go straight on. The correct decision depends on where the taxi is
trying to get to. In other words, it can be said now the agent needs some sort of
goal information. Once the agent gets goal information, it combines this information
with the model to choose actions that achieve the goal. Now, the agent is a goal-
based agent.
● Goals alone are not enough to generate high-quality behavior in most
environments. For example, many action sequences will get the taxi to its
destination (thereby achieving the goal) but some are quicker, safer, more reliable,
or cheaper than others. This is achieved by Utility-based agents.
● An agent’s utility function is essentially an internalization of the performance measure.
● Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes.
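Maximizing expected utility can be sketched directly: weight the utility of each possible outcome by its probability and pick the best action. The two routes and their outcome distributions below are invented for illustration.

```python
def expected_utility(action, outcomes, utility):
    """Sum of the utility of each outcome, weighted by its probability."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def choose_action(actions, outcomes, utility):
    """A rational utility-based agent selects the action that
    maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical taxi example: two routes with different arrival outcomes.
OUTCOMES = {
    "highway":    {"on_time": 0.7, "late": 0.3},
    "back_roads": {"on_time": 0.9, "late": 0.1},
}
UTILITY = {"on_time": 10, "late": 0}.get
```

Here the back roads have the higher expected utility (9.0 vs. 7.0), so `choose_action` prefers them even though both routes achieve the same goal of reaching the destination.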
● We have seen how an agent program selects actions by various methods. But we must also ask how the agent program comes into being in the first place.
● Turing proposed a method: build learning machines and then teach them.
● A learning agent can be divided into four conceptual components: a performance element, a learning element, a critic, and a problem generator.
● The learning element is responsible for making improvements and the
performance element is responsible for selecting external actions.
The learning element uses feedback from the critic on how the agent
is doing and determines how the performance element should be
modified to do better in the future.
● The last component of the learning agent is the problem generator. It is
responsible for suggesting actions that will lead to new and informative
experiences.
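The flow of feedback among the four components can be wired together in a minimal skeleton. Everything below (the trivial critic, the fixed policy, the exploration schedule) is an assumption chosen only to make the roles concrete.

```python
class LearningAgent:
    """Performance element picks actions; the critic scores them against
    a performance standard; the learning element revises the performance
    element; the problem generator suggests exploratory actions that
    lead to new, informative experiences."""

    def __init__(self):
        self.preferred_action = "A"   # performance element's current policy

    def critic(self, action):
        # Feedback against a fixed standard: only "B" scores well here.
        return 1 if action == "B" else 0

    def learning_element(self, action, score):
        if score == 0:
            self.preferred_action = "B"   # revise the policy after bad feedback

    def problem_generator(self, step):
        # Occasionally force an exploratory action instead of the policy.
        return "B" if step % 5 == 0 else None

    def step(self, t):
        action = self.problem_generator(t) or self.preferred_action
        self.learning_element(action, self.critic(action))
        return action
```

Running `step` repeatedly shows the intended loop: the critic’s feedback on an early poor action lets the learning element modify the performance element, so later steps do better.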
____________
• An agent is something that perceives and acts in an environment. The agent
function for an agent specifies the action taken by the agent in response to any
percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment. A rational agent acts so as to maximize the expected value of the
performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure, the external
environment, the actuators, and the sensors. In designing an agent, the first step
must always be to specify the task environment as fully as possible.
• Task environments vary along several significant dimensions. They can be fully
or partially observable, single-agent or multiagent, deterministic or stochastic,
episodic or sequential, static or dynamic, discrete or continuous, and known or
unknown.
• The agent program implements the agent function. There exists a variety of basic
agent-program designs reflecting the kind of information made explicit and used in
the decision process. The designs vary in efficiency, compactness, and flexibility.
The appropriate design of the agent program depends on the nature of the
environment.
• Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected “happiness”.
• All agents can improve their performance through learning.
_____________

More Related Content

Similar to Intelligent Agents, A discovery on How A Rational Agent Acts

Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
Vinod Kumar Meghwar
 
Lecture 2 agent and environment
Lecture 2   agent and environmentLecture 2   agent and environment
Lecture 2 agent and environment
Vajira Thambawita
 
Introduction To Artificial Intelligence
Introduction To Artificial IntelligenceIntroduction To Artificial Intelligence
Introduction To Artificial Intelligence
NeHal VeRma
 
Introduction To Artificial Intelligence
Introduction To Artificial IntelligenceIntroduction To Artificial Intelligence
Introduction To Artificial Intelligence
NeHal VeRma
 
Artificial intelligence introduction
Artificial intelligence introductionArtificial intelligence introduction
Artificial intelligence introduction
melchismel
 
Intelligent Agents
Intelligent Agents Intelligent Agents
Intelligent Agents
Amar Jukuntla
 
CS Artificial intelligence chapter 2.pptx
CS Artificial intelligence chapter 2.pptxCS Artificial intelligence chapter 2.pptx
CS Artificial intelligence chapter 2.pptx
ethiouniverse
 
Agents_AI.ppt
Agents_AI.pptAgents_AI.ppt
Agents_AI.ppt
sandeep54552
 
W2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptxW2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptx
Javaid Iqbal
 
intelligentagent-140313053301-phpapp01 (1).pdf
intelligentagent-140313053301-phpapp01 (1).pdfintelligentagent-140313053301-phpapp01 (1).pdf
intelligentagent-140313053301-phpapp01 (1).pdf
ShivareddyGangam
 
Intelligent agent
Intelligent agentIntelligent agent
Intelligent agent
Geeta Jaswani
 
Week 3.pdf
Week 3.pdfWeek 3.pdf
Week 3.pdf
ZamshedForman1
 
Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)
mufassirin
 
Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agents
Pankaj Debbarma
 
AI Lesson 02
AI Lesson 02AI Lesson 02
AI Lesson 02
Assistant Professor
 
Intelligent AGent class.pptx
Intelligent AGent class.pptxIntelligent AGent class.pptx
Intelligent AGent class.pptx
AdJamesJohn
 
AI - Agents & Environments
AI - Agents & EnvironmentsAI - Agents & Environments
AI - Agents & Environments
Learnbay Datascience
 
AI_Ch2.pptx
AI_Ch2.pptxAI_Ch2.pptx
AI_Ch2.pptx
qwtadhsaber
 
Lecture 4 (1).pptx
Lecture 4 (1).pptxLecture 4 (1).pptx
Lecture 4 (1).pptx
SumairaRasool6
 
intelligentagent-190406015753.pptx
intelligentagent-190406015753.pptxintelligentagent-190406015753.pptx
intelligentagent-190406015753.pptx
SandipPradhan23
 

Similar to Intelligent Agents, A discovery on How A Rational Agent Acts (20)

Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Lecture 2 agent and environment
Lecture 2   agent and environmentLecture 2   agent and environment
Lecture 2 agent and environment
 
Introduction To Artificial Intelligence
Introduction To Artificial IntelligenceIntroduction To Artificial Intelligence
Introduction To Artificial Intelligence
 
Introduction To Artificial Intelligence
Introduction To Artificial IntelligenceIntroduction To Artificial Intelligence
Introduction To Artificial Intelligence
 
Artificial intelligence introduction
Artificial intelligence introductionArtificial intelligence introduction
Artificial intelligence introduction
 
Intelligent Agents
Intelligent Agents Intelligent Agents
Intelligent Agents
 
CS Artificial intelligence chapter 2.pptx
CS Artificial intelligence chapter 2.pptxCS Artificial intelligence chapter 2.pptx
CS Artificial intelligence chapter 2.pptx
 
Agents_AI.ppt
Agents_AI.pptAgents_AI.ppt
Agents_AI.ppt
 
W2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptxW2_Lec03_Lec04_Agents.pptx
W2_Lec03_Lec04_Agents.pptx
 
intelligentagent-140313053301-phpapp01 (1).pdf
intelligentagent-140313053301-phpapp01 (1).pdfintelligentagent-140313053301-phpapp01 (1).pdf
intelligentagent-140313053301-phpapp01 (1).pdf
 
Intelligent agent
Intelligent agentIntelligent agent
Intelligent agent
 
Week 3.pdf
Week 3.pdfWeek 3.pdf
Week 3.pdf
 
Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)Intelligence Agent - Artificial Intelligent (AI)
Intelligence Agent - Artificial Intelligent (AI)
 
Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agents
 
AI Lesson 02
AI Lesson 02AI Lesson 02
AI Lesson 02
 
Intelligent AGent class.pptx
Intelligent AGent class.pptxIntelligent AGent class.pptx
Intelligent AGent class.pptx
 
AI - Agents & Environments
AI - Agents & EnvironmentsAI - Agents & Environments
AI - Agents & Environments
 
AI_Ch2.pptx
AI_Ch2.pptxAI_Ch2.pptx
AI_Ch2.pptx
 
Lecture 4 (1).pptx
Lecture 4 (1).pptxLecture 4 (1).pptx
Lecture 4 (1).pptx
 
intelligentagent-190406015753.pptx
intelligentagent-190406015753.pptxintelligentagent-190406015753.pptx
intelligentagent-190406015753.pptx
 

Recently uploaded

Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
Transcat
 
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
upoux
 
Generative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdfGenerative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdf
mahaffeycheryld
 
Pressure Relief valve used in flow line to release the over pressure at our d...
Pressure Relief valve used in flow line to release the over pressure at our d...Pressure Relief valve used in flow line to release the over pressure at our d...
Pressure Relief valve used in flow line to release the over pressure at our d...
cannyengineerings
 
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Sinan KOZAK
 
Call For Paper -3rd International Conference on Artificial Intelligence Advan...
Call For Paper -3rd International Conference on Artificial Intelligence Advan...Call For Paper -3rd International Conference on Artificial Intelligence Advan...
Call For Paper -3rd International Conference on Artificial Intelligence Advan...
ijseajournal
 
Digital Twins Computer Networking Paper Presentation.pptx
Digital Twins Computer Networking Paper Presentation.pptxDigital Twins Computer Networking Paper Presentation.pptx
Digital Twins Computer Networking Paper Presentation.pptx
aryanpankaj78
 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
UReason
 
一比一原版(uoft毕业证书)加拿大多伦多大学毕业证如何办理
一比一原版(uoft毕业证书)加拿大多伦多大学毕业证如何办理一比一原版(uoft毕业证书)加拿大多伦多大学毕业证如何办理
一比一原版(uoft毕业证书)加拿大多伦多大学毕业证如何办理
sydezfe
 
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
MadhavJungKarki
 
TIME TABLE MANAGEMENT SYSTEM testing.pptx
TIME TABLE MANAGEMENT SYSTEM testing.pptxTIME TABLE MANAGEMENT SYSTEM testing.pptx
TIME TABLE MANAGEMENT SYSTEM testing.pptx
CVCSOfficial
 
2. protection of river banks and bed erosion protection works.ppt
2. protection of river banks and bed erosion protection works.ppt2. protection of river banks and bed erosion protection works.ppt
2. protection of river banks and bed erosion protection works.ppt
abdatawakjira
 
Open Channel Flow: fluid flow with a free surface
Open Channel Flow: fluid flow with a free surfaceOpen Channel Flow: fluid flow with a free surface
Open Channel Flow: fluid flow with a free surface
Indrajeet sahu
 
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...
PriyankaKilaniya
 
Blood finder application project report (1).pdf
Blood finder application project report (1).pdfBlood finder application project report (1).pdf
Blood finder application project report (1).pdf
Kamal Acharya
 
Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...
Prakhyath Rai
 
Null Bangalore | Pentesters Approach to AWS IAM
Null Bangalore | Pentesters Approach to AWS IAMNull Bangalore | Pentesters Approach to AWS IAM
Null Bangalore | Pentesters Approach to AWS IAM
Divyanshu
 
Mechatronics material . Mechanical engineering
Mechatronics material . Mechanical engineeringMechatronics material . Mechanical engineering
Mechatronics material . Mechanical engineering
sachin chaurasia
 
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...
shadow0702a
 
4. Mosca vol I -Fisica-Tipler-5ta-Edicion-Vol-1.pdf
4. Mosca vol I -Fisica-Tipler-5ta-Edicion-Vol-1.pdf4. Mosca vol I -Fisica-Tipler-5ta-Edicion-Vol-1.pdf
4. Mosca vol I -Fisica-Tipler-5ta-Edicion-Vol-1.pdf
Gino153088
 

Recently uploaded (20)

Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...
 
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
 
Generative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdfGenerative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdf
 
Pressure Relief valve used in flow line to release the over pressure at our d...
Pressure Relief valve used in flow line to release the over pressure at our d...Pressure Relief valve used in flow line to release the over pressure at our d...
Pressure Relief valve used in flow line to release the over pressure at our d...
 
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Intelligent Agents: A Discovery of How a Rational Agent Acts

  • 2. Why the Rational Agent Approach? Because this concept of developing a sound set of design principles for building successful agents, systems that can reasonably be called intelligent, is central to artificial intelligence.
  • 5. ● An agent perceives its environment through sensors and acts upon that environment through actuators, mechanisms that put something into motion. ● A software agent receives keystrokes, file contents, and network packets as input, and it acts on the environment by displaying output on the screen or sending network packets.
  • 6. ● The word ‘percept’ means the inputs that an agent receives through its sensors at any given instant. For example, a human agent has sense organs for perceiving percepts, and hands, legs, a vocal tract, and so on as actuators. Similarly, a robotic agent might have cameras and infrared range finders for sensing the environment and various motors as actuators. ● The percept sequence of an agent is the complete history of everything the agent has ever perceived, and it helps the agent choose an action at any given time.
  • 7. ● An agent’s function describes its behavior. The agent function maps any percept sequence to an action, so it is an abstract mathematical description. By tabulating all possible percept sequences and their corresponding actions, we can externally characterize the agent's behavior. ● An agent program is an internal characterization of an agent: a concrete implementation running on a physical system. ● For example, the agent function for our cleaning robot is to clean a room. The robot perceives its environment through sensors, determines the dirtiness level, and takes actions to clean the room efficiently; the corresponding agent program might be a piece of software that runs on the robot's control system.
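The tabulation idea can be sketched in code. This is a minimal, hypothetical example for a two-square vacuum world; the square names, percepts, and actions are illustrative, not taken from the slides:

```python
# A tiny agent-function table for a two-square vacuum world;
# each percept is a (location, status) pair, and each key is a
# full percept sequence (here, sequences of length one).
AGENT_TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

def table_driven_agent(percept_sequence):
    """Look up the action for the whole percept sequence seen so far."""
    return AGENT_TABLE.get(tuple(percept_sequence), "NoOp")
```

In practice such a table grows unmanageably large, which is exactly why agent programs implement the function compactly instead of tabulating it.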
  • 8. ● When an agent performs a randomized action, it may look silly, but randomization can in fact be intelligent. If we find that an agent uses randomization to choose its actions, then to characterize its behavior we would have to identify the probability of each action by trying each action many times. ● Before we close this section, note that the notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization. One could even view a hand-held calculator as an agent: given the percept sequence “2 + 2 =”, it chooses the action of displaying “4”.
  • 9. 2. Good Behavior: The Concept of Rationality
  • 10. ● A rational agent is one that does the right thing. To judge whether a task has been done rightly, we look at the sequence of actions that causes the environment to go through a sequence of states. If that sequence of states is desirable, then this notion of desirability is the performance measure used to evaluate the rationality of the agent. ● We should evaluate environment states, not the agent's own states. If we evaluated the agent's states, the agent could achieve “rationality” by deluding itself that its performance was perfect. So, as a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave. 2.1 Rationality. What is rational at any given time depends on: • The performance measure that defines the criterion of success. • The agent's prior knowledge of the environment. • The actions that the agent can perform. • The agent's percept sequence to date.
  • 11. ● So, for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
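Maximizing expected performance can be sketched as a small function; the outcome probabilities and scores below are invented purely for illustration:

```python
def rational_choice(actions, outcomes, performance):
    """Pick the action whose expected performance score is highest.
    outcomes maps action -> list of (probability, resulting_state);
    performance maps resulting_state -> numeric score."""
    def expected(action):
        return sum(p * performance(state) for p, state in outcomes[action])
    return max(actions, key=expected)

# Invented numbers for illustration only:
outcomes = {
    "Suck": [(0.9, "clean"), (0.1, "dirty")],
    "NoOp": [(1.0, "dirty")],
}
scores = {"clean": 1.0, "dirty": 0.0}
best = rational_choice(["Suck", "NoOp"], outcomes, scores.get)
# best == "Suck": expected scores are 0.9 vs. 0.0
```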
  • 12. 2.2 Omniscience, Learning, and Autonomy ● An omniscient agent knows the actual outcome of its actions and can act accordingly. In reality, omniscience is impossible. ● Rationality maximizes expected performance, while perfection maximizes actual performance. ● The point is that we cannot expect an agent to do what turns out to be the best action after the fact; unless the agent could foresee the future, it is impossible to design an agent that always fulfills such a specification.
  • 13. ● Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality. ● Exploration of an environment is another form of information gathering. ● Some agents may start with prior knowledge, but they must adapt and modify their understanding based on experience. So, a rational agent should not just gather information but should also learn as much as possible from what it perceives.
  • 14. ● If an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. ● An autonomous agent learns to compensate for partial or incorrect prior knowledge, so a rational agent should be autonomous. ● After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge. This independence is what makes a rational agent autonomous.
  • 16. ● The nature of the environment for an agent is described by specifying the task environment and its properties. 3.1 Specifying the task environment ● To specify the task environment of an agent, we specify PEAS: Performance measure, Environment, Actuators, and Sensors.
  • 17. ● Here is an example specification of the task environment of a self-driving car. ● Performance measures for this example include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; and maximizing profits.
  • 18. ● The environment includes a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The car may also need to interact with potential and actual passengers. ● The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator, control over steering and braking, and a display screen or voice synthesizer to communicate with the passengers. ● The basic sensors for the taxi include one or more controllable video cameras so that it can see the road; a speedometer to control the vehicle properly, especially on curves; a GPS (global positioning system) receiver so that it does not get lost; and a keyboard or microphone for the passenger to request a destination.
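A PEAS description like the taxi's can be captured as a simple record. The field contents below paraphrase the slides; the class and field names are just one possible layout:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["correct destination", "minimal fuel and wear",
                 "minimal trip time/cost", "safety", "passenger comfort"],
    environment=["roads", "other traffic", "pedestrians", "passengers"],
    actuators=["accelerator", "steering", "brake", "display/voice synthesizer"],
    sensors=["cameras", "speedometer", "GPS", "keyboard/microphone"],
)
```

Writing the four lists out explicitly like this is a useful first step before designing the agent program itself.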
  • 19. ● Softbot, or software agent: a “softbot” is a software agent designed to operate on the Internet. - Example: a softbot Web site operator designed to scan Internet news sources and show interesting items to its users, while selling advertising space to generate revenue. - The characteristics attributed to this softbot include the need for natural language processing abilities, the ability to learn user and advertiser preferences, and the capacity to dynamically adjust plans in response to changes, such as the connection status of news sources. - In this context, a softbot is an intelligent software agent tailored for web operations, dealing with the complexity of the online environment and interacting with both artificial and human agents.
  • 20. 3.2 Properties of the task environment ● Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data. If the agent has no sensors at all, the environment is unobservable. ● Single-agent vs. multiagent: an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. ● Chess is a competitive multiagent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment. ● Deterministic vs. stochastic: if the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic. “Stochastic” refers to a process that involves a random element or probability.
  • 21. ● We say an environment is uncertain if it is not fully observable or not deterministic. ● A nondeterministic environment is one in which actions are characterized by their possible outcomes, but no probabilities are attached to them. Nondeterministic environment descriptions are usually associated with performance measures that require the agent to succeed for all possible outcomes of its actions. ● In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode, the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision does not affect whether the next part is defective. ● In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
  • 22. ● If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Crossword puzzles are static. ● If the environment itself does not change with the passage of time but the agent’s performance score does, then we say the environment is semi-dynamic. Chess, when played with a clock, is semi-dynamic. ● The state of knowledge of an agent determines whether it is in a known environment or an unknown environment. In a known environment, the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions. ● The hardest cases are partially observable, multiagent, stochastic, sequential, dynamic, continuous, and unknown.
  • 23. Here are examples of task environments and their characteristics:
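A couple of rows of such a characterization can be sketched as data; the two environments and their labels follow the dimensions discussed above, encoded in an illustrative dictionary layout:

```python
# Classification of two example task environments along the
# standard dimensions (labels follow the slides' terminology).
ENVIRONMENTS = {
    "crossword puzzle": {
        "observable": "fully", "agents": "single",
        "deterministic": True, "sequential": True,
        "static": True, "discrete": True,
    },
    "taxi driving": {
        "observable": "partially", "agents": "multi",
        "deterministic": False, "sequential": True,
        "static": False, "discrete": False,
    },
}
```

Taxi driving sits at the hard end of every dimension, which is why it is such a popular running example.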
  • 24. ● One or more agents are placed in a simulated environment, their behavior is observed over time, and they are evaluated according to a given performance measure. Such experiments are often carried out not for a single environment but for many environments drawn from an environment class. ● For example, to evaluate a taxi driver in simulated traffic, we would want to run many simulations with different traffic, lighting, and weather conditions. ● An environment generator selects particular environments (with certain likelihoods) from each environment class in which to run the agent.
  • 25. How do the components of an agent program work? ● The components of an agent's environment can be placed on an axis of increasing complexity and expressiveness: atomic, factored, and structured. ● In an atomic representation, each state of the world is indivisible. For example, when driving from one end of a country to the other, you can break the route down into the names of the cities you will pass through. Search algorithms, game playing, hidden Markov models, and Markov decision processes work on atomic representations. ● In a factored representation, we consider more than just one atomic label. For example, we may need to consider the amount of gas in the tank, our current GPS coordinates, and whether or not the oil warning light is working. A factored representation breaks each state down into attributes or variables, each of which can have a value. ● Areas of AI based on factored representations include constraint satisfaction algorithms, propositional logic, planning, Bayesian networks, and machine learning algorithms. ● Sometimes we need to represent things related to each other, not just variables and values. For example, while driving from one end to the other, there might be a situation where, at a turn, a cow blocks the car from moving ahead. What can we do in that situation?
  • 26. ● Structured representations underlie relational databases, first-order logic, first-order probability models, knowledge-based reasoning, and natural language understanding. ● In fact, almost everything that concerns us humans is expressed in terms of objects and their relationships. ● As we move from atomic to structured representations, the expressiveness of the components of an environment increases. ● To gain the benefits of expressive representations while avoiding their drawbacks, an intelligent system for the real world may need to operate at all points of the axis simultaneously.
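The three representations can be contrasted on the same driving state; the particular names and values below are invented for illustration:

```python
# Atomic: the state is a single indivisible label.
atomic_state = "city_B"

# Factored: the state is split into attributes with values.
factored_state = {
    "location": "city_B",
    "fuel_level": 0.4,
    "oil_warning_light": False,
}

# Structured: objects and the relationships between them,
# here as (subject, relation, object) triples.
structured_state = [
    ("Cow", "blocks", "Road"),
    ("Car", "on", "Road"),
]
```

Only the structured form can express the cow-blocking-the-road situation from the previous slide, since it relates objects to one another rather than just assigning values to variables.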
  • 28. ● The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators, which we call the architecture, so that agent = architecture + program. ● Note the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. ● There are four basic kinds of agent programs: ○ Simple reflex agents ○ Model-based reflex agents ○ Goal-based agents ○ Utility-based agents.
  • 29. ● The simplest kind of agent is the simple reflex agent. This agent selects actions on the basis of the current percept, ignoring the rest of the percept history. For example, if a self-driving car detects an object in front of it, it decides an action based on that current situation alone: it initiates braking. This triggers an established connection in the agent program to the action “initiate braking”. We call such a connection a condition-action rule.
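A simple reflex agent can be sketched as a rule list checked against the current percept only; the rules and percept keys here are hypothetical:

```python
# Condition-action rules: each pairs a test on the current percept
# with the action to take when the test succeeds.
RULES = [
    (lambda percept: percept.get("object_ahead", False), "initiate-braking"),
    (lambda percept: True, "keep-driving"),  # default rule
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action
```

Note that the agent keeps no memory at all: the same percept always produces the same action, which is exactly its limitation.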
  • 30. ● Simple reflex agents have the admirable property of simplicity, but they turn out to be of limited intelligence. When they can observe the environment only partially, they may get stuck in infinite loops of actions; an agent may be able to escape such a loop by randomizing its actions. ● The best way for the agent to handle partial observability is to keep track of the part of the world it cannot see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. ● Knowledge about how the world evolves independently of the agent, and about how the agent's own actions affect the world, whether implemented as a simple Boolean circuit or as a complete scientific theory, is called a model of the world. An agent that uses such a model is called a model-based agent.
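The internal-state idea can be sketched by extending the braking example; the belief-update rule below is a deliberately tiny toy model:

```python
class ModelBasedReflexAgent:
    """Keeps a belief about the world that persists across percepts."""

    def __init__(self):
        # internal state: what we currently believe about the world
        self.believe_object_ahead = False

    def update_state(self, percept):
        # fold the new percept into the belief; if the sensor reports
        # nothing, keep the previous belief (tracking the unseen part)
        if "object_ahead" in percept:
            self.believe_object_ahead = percept["object_ahead"]

    def act(self, percept):
        self.update_state(percept)
        return ("initiate-braking" if self.believe_object_ahead
                else "keep-driving")
```

Unlike the simple reflex agent, this one keeps braking even when a percept temporarily carries no information about the obstacle, because its internal state remembers it.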
  • 32. ● Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, the agent now needs some sort of goal information. Once the agent has goal information, it combines it with the model to choose actions that achieve the goal. The agent is now a goal-based agent.
  • 33. ● Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Distinguishing among them is the job of utility-based agents.
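The distinction can be made concrete with a toy example: both routes below achieve the goal, so a goal-based agent cannot choose between them, but a utility function ranks them. The route names, numbers, and weights are entirely made up:

```python
# Both routes reach the destination; only their quality differs.
routes = {
    "highway":   {"reaches_goal": True, "minutes": 20, "risk": 0.20},
    "back_road": {"reaches_goal": True, "minutes": 35, "risk": 0.05},
}

def utility(route):
    # trade speed off against safety; the weights are arbitrary
    return -route["minutes"] - 200 * route["risk"]

# goal filter first, then utility ranking
candidates = [name for name, r in routes.items() if r["reaches_goal"]]
best_route = max(candidates, key=lambda name: utility(routes[name]))
# best_route == "back_road": utilities are -60 vs. -45
```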
  • 34. ● An agent's utility function is essentially an internalization of the performance measure. ● Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes. ● We have seen how an agent program selects actions by various methods. But we must also ask how the agent program comes into being. ● Turing proposed a method: build learning machines and then teach them. ● A learning agent can be divided into four conceptual components.
  • 36. ● The learning element is responsible for making improvements, and the performance element is responsible for selecting external actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. ● The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences.
  • 38. • An agent is something that perceives and acts in an environment. The agent function for an agent specifies the action taken by the agent in response to any percept sequence. • The performance measure evaluates the behavior of the agent in an environment. A rational agent acts so as to maximize the expected value of the performance measure, given the percept sequence it has seen so far. • A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible. • Task environments vary along several significant dimensions. They can be fully or partially observable, single-agent or multiagent, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and known or unknown.
  • 39. • The agent program implements the agent function. There exists a variety of basic agent-program designs, reflecting the kind of information made explicit and used in the decision process. The designs vary in efficiency, compactness, and flexibility. The appropriate design of the agent program depends on the nature of the environment. • Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected “happiness”. • All agents can improve their performance through learning.

Editor's Notes

  1. In the book we are going to follow the rational agent approach, so before moving forward we need to know what the rational agent approach is.
  2. Agents interact with environments through sensors and actuators
  3. Talk about sensors, actuators, environment, percept, percept sequence.
  4. Talk about sensors, actuators, environment, percept, percept sequence.
  5. https://miro.medium.com/v2/resize:fit:572/1*uzVtquVT0jGfbNAf7Qc1UQ.gif