This document discusses intelligent agents and their design. It begins by defining an agent as anything that can perceive its environment and act upon it. It then describes different types of agents including human agents, robotic agents, and software agents. It introduces the concepts of percepts, actions, and agent functions. It also discusses rational agents and the requirements for rational behavior. Finally, it covers different aspects of agent design including performance measures, environments, actuators, sensors (PEAS), environment types, and the four basic types of agents from simple reflex agents to utility-based agents.
This document discusses different types of intelligent agents. It describes four basic types of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Simple reflex agents select actions based only on the current percept, while model-based reflex agents maintain an internal model of the world. Goal-based agents use goals to determine desirable situations. Utility-based agents maximize an internal utility function that represents the performance measure. The document also discusses agent functions, percepts, environments, and the PEAS properties of task environments.
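The contrast between the first two agent types can be made concrete with the classic two-square vacuum world. The sketch below is illustrative, not from the document; the location and status names are invented for the example:

```python
# A simple reflex agent for the two-square vacuum world. It looks only
# at the current percept (location, status): no history, no world model.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# A model-based variant keeps an internal model of which squares are
# clean, so it can stop once the model says the whole world is clean.
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            self.model[location] = "Clean"  # square is clean after sucking
            return "Suck"
        self.model[location] = "Clean"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the model says everything is clean
        return "Right" if location == "A" else "Left"
```

Note that the simple reflex agent can never stop: with identical percepts it always returns identical actions, while the model-based agent's internal state lets it behave differently on the same percept.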
This document discusses intelligent agents and their environments. It defines agents as things that perceive their environment and act on it. Rational agents are those that select actions expected to maximize their performance measure given what they perceive. The PEAS framework describes an agent's Performance measure, Environment, Actuators, and Sensors. Environment types include fully/partially observable, deterministic/stochastic, and more. Agent types range from simple reflex agents to more complex goal-based and utility-based agents.
This document provides an overview of intelligent agents:
1) It defines agents and environments, and describes how agents perceive their environment through sensors and act through actuators. Examples of human and robotic agents are given.
2) It introduces the concept of rational agents and the PEAS framework for defining an agent's Performance measure, Environment, Actuators, and Sensors. Several examples of applying PEAS are described.
3) It describes different types of environments an agent may operate in, including fully/partially observable, deterministic/stochastic, and discrete/continuous.
The document defines key concepts in artificial intelligence including intelligent agents, environments, and rational agents. An intelligent agent is anything that can perceive its environment and take actions to achieve its goals. Rational agents aim to maximize their performance measure given their percepts and knowledge. Different types of agents are described including reflex agents, model-based agents, goal-based agents, and utility-based agents. State representations and learning agents are also covered at a high level.
The document discusses different types of intelligent agents and their characteristics. It defines an agent as anything that can perceive its environment and act upon it. Example agent types include human agents, robotic agents, and software agents. The document also discusses windshield wiper agents as an example and covers agent terminology such as goals, percepts, sensors, effectors, and actions. Later sections discuss rational agents and how they are designed to maximize their performance based on their percept sequences and knowledge. Different types of agents are introduced, including simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. The document also covers properties of task environments and the structure of agents.
An intelligent agent is anything that perceives its environment through sensors and acts to achieve its goals. Agents can be described using the PAGE framework of percepts, actions, goals, and environment. Rational agents choose actions that are expected to maximize performance given past experiences. Different agent types include reflex, state-based, goal-based, utility-based, and learning agents.
1) An intelligent agent is anything that perceives its environment through sensors and acts upon the environment through effectors. Agents can be humans, robots, software programs, etc.
2) The PEAS framework is used to specify the setting for designing an intelligent agent by defining the Performance measure, Environment, Actuators, and Sensors.
3) There are different types of environments including fully/partially observable, deterministic/stochastic, and static/dynamic environments that impact agent design.
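The PEAS specification from point 2 can be captured as a simple record. The automated-taxi entries below are the standard textbook illustration, not something stated in this document:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment spec: Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# Classic automated-taxi example of a PEAS specification.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer"],
)
```

Writing the PEAS description down first, before any agent program, is the point of the framework: it pins down what "doing well" means before design begins.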
The document provides an overview of artificial intelligence (AI) and intelligent agents. It defines AI as the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence. An intelligent agent is described as anything that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through learning or using knowledge. The key components of an intelligent agent are described as its architecture, agent function that maps perceptions to actions, and agent program that implements the function. Different types of agents are discussed including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.
An intelligent agent is an autonomous entity that perceives its environment and takes actions to maximize its chances of successfully achieving its goals. It has sensors to observe the environment and actuators to perform actions. A rational intelligent agent selects actions that are expected to be most useful based on its past experiences and built-in knowledge. Specifying the task environment through PEAS - performance measure, environment, actuators, and sensors - helps define the problem an intelligent agent aims to solve.
An agent is anything that perceives its environment through sensors and acts upon it through actuators. A rational agent aims to maximize its performance measure by selecting actions expected to have the best outcome, given its percepts and built-in knowledge. To design a rational agent, its task environment must be specified using the PEAS framework of Performance measure, Environment, Actuators, and Sensors. There are four main types of agents: simple reflex agents that react solely based on current percepts; model-based reflex agents that also consider past states; goal-based agents that take future goals into account; and utility-based agents that choose actions to maximize expected utility or happiness.
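The step from goal-based to utility-based selection can be sketched as follows. The environment model and utility function here are hypothetical placeholders, chosen only to make the idea runnable:

```python
# Utility-based action selection: instead of asking "does this action
# reach the goal?", score the predicted outcome of every action and
# pick the action whose result has the highest utility.
def utility_based_choice(state, actions, predict, utility):
    """predict(state, action) -> next state; utility(state) -> number."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy example: states are positions on a line, the goal is position 10,
# and utility is negative distance to the goal (closer is better).
predict = lambda s, a: s + a       # actions move the agent by -1, 0, or +1
utility = lambda s: -abs(10 - s)
best = utility_based_choice(3, [-1, 0, 1], predict, utility)  # -> 1
```

A goal-based agent would only distinguish "at position 10" from "not at position 10"; the utility function additionally ranks all the non-goal states, which is what lets the agent trade off conflicting objectives.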
An intelligent agent perceives its environment through sensors, acts upon the environment through effectors or actuators, and aims to maximize a performance measure in its environment. The document discusses different types of agent programs including table-driven agents, reflex agents, model-based reflex agents, goal-based agents, and learning agents. It also examines properties of task environments such as being fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, single-agent or multi-agent.
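A table-driven agent, the first program type mentioned above, simply looks its entire percept sequence up in a table. The two-entry table below is made up to show the mechanism, and also why it cannot scale: the table must have a row for every possible percept history:

```python
# Table-driven agent: the action is looked up using the whole percept
# sequence seen so far. The table grows exponentially with the horizon,
# which makes this design of theoretical interest only.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table   # maps percept sequences (tuples) to actions
        self.percepts = []   # history of everything perceived so far

    def act(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move",
}
agent = TableDrivenAgent(table)
```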
This document discusses intelligent agents and their environments. It defines an agent as anything that perceives its environment and acts upon it. Rational agents are those that select actions expected to maximize their performance based on perceptions. An agent's task environment consists of its performance measure, environment, actuators, and sensors. Environment types include fully/partially observable, deterministic/stochastic, and single/multi-agent. Four basic agent types are described: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Learning agents use feedback to improve performance over time. The document provides examples of agents and discusses their design considerations based on task environment properties.
The document also summarizes key points about the Turing Test:
1) The Turing Test proposes that a computer can be considered intelligent if an interrogator cannot distinguish it from a human via conversation.
2) Notable chatbots that have attempted the Turing Test include ELIZA, Parry, and Eugene Goostman. Eugene Goostman convinced 29% of judges it was human.
3) Critics argue that passing the Turing Test does not prove a machine has human-level understanding, as it can mimic responses without true comprehension.
An agent can be anything that perceives its environment and acts upon it. There are three main types of agents: human agents that use senses and limbs, robotic agents that use cameras/sensors and motors, and software agents that use inputs like keystrokes and display outputs. An agent operates in a cycle of perceiving, thinking, and acting. Sensors detect environmental changes and actuators allow the agent to act. Intelligent agents autonomously achieve goals using sensors and actuators. Rational agents perform optimally to maximize their performance measure. The PEAS model defines an agent's performance criteria, environment, actuators, and sensors. Learning agents improve through experience by incorporating a learning element, critic, performance element, and problem generator.
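The learning-agent components named above can be wired together in a minimal skeleton. All component behaviors here are stand-ins to show how the pieces interact, not an implementation from the document:

```python
# Skeleton of a learning agent with the four standard components:
# performance element (chooses actions), critic (judges outcomes against
# a fixed performance standard), learning element (improves the rules),
# and problem generator (suggests exploratory actions).
class LearningAgent:
    def __init__(self):
        self.rules = {}  # percept -> action, improved over time

    def performance_element(self, percept):
        return self.rules.get(percept, "explore")

    def critic(self, percept, action, reward):
        # Did the action meet the performance standard?
        return reward > 0

    def learning_element(self, percept, action, good):
        if good:
            self.rules[percept] = action  # reinforce successful behavior

    def problem_generator(self):
        return "try-new-action"  # propose exploration

    def step(self, percept, action, reward):
        good = self.critic(percept, action, reward)
        self.learning_element(percept, action, good)
        return self.performance_element(percept)

agent = LearningAgent()
```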
1) Intelligent agents are systems that perceive their environment and act upon it. They can be designed to act or think rationally or humanly.
2) An agent is anything that can perceive its environment through sensors and act upon the environment through effectors, mapping percept sequences to actions.
3) Key properties of intelligent agents include autonomy, reactivity, proactiveness, balancing reactive and goal-oriented behavior, and social ability. Agents must be able to operate independently, respond to changes, pursue goals, and interact with other agents.
The document discusses different types of intelligent agents. It defines agents as anything that can perceive its environment and act upon that environment. It provides examples of human, robotic, and software agents. The document also discusses rational agents and how they should strive to maximize their performance based on their perceptions and available actions. Finally, it outlines different types of environments that agents may operate in, including fully/partially observable, deterministic/stochastic, and single/multi-agent environments.
This document discusses intelligent agents and their characteristics. It defines an intelligent agent as an autonomous entity that can perceive its environment through sensors and act upon the environment through effectors to achieve goals. Intelligent agents must be able to perceive, make decisions based on perceptions, take actions as a result of decisions, and take rational actions. The document also describes different types of agents including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Finally, it discusses environments that agents can operate in based on characteristics like observability, predictability, event structure, existence of other agents, consistency, certainty, and accessibility.
An intelligent agent perceives its environment via sensors and acts upon that environment with its effectors.
A discrete agent receives percepts one at a time, and maps this percept sequence to a sequence of discrete actions.
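The mapping just described, from a percept sequence to an action, is the agent function. A minimal sketch, with invented percepts and actions:

```python
# The agent function maps the entire percept sequence seen so far to an
# action. This wrapper accumulates the history and delegates to any
# function of that history.
def make_agent(agent_function):
    history = []
    def agent(percept):
        history.append(percept)
        return agent_function(tuple(history))
    return agent

# Hypothetical rule that only inspects the most recent percept, which
# makes this particular agent effectively a simple reflex agent.
reactive = make_agent(lambda seq: "brake" if seq[-1] == "obstacle" else "cruise")
```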
Properties
Autonomous
Reactive to the environment
Pro-active (goal-directed)
Interacts with other agents via the environment
Humans
Sensors: Eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose (olfaction), neuromuscular system (proprioception)
Percepts:
At the lowest level – electrical signals from these sensors
After preprocessing – objects in the visual field (location, textures, colors, …), auditory streams (pitch, loudness, direction), …
Effectors: limbs, digits, eyes, tongue, …
Actions: lift a finger, turn left, walk, run, carry an object, …
The Point: percepts and actions need to be carefully defined, possibly at different levels of abstraction
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
Operates in an environment
Perceives its environment through sensors
Acts upon its environment through actuators/effectors
Has goals
This document discusses intelligent agents and their components. It defines an agent as anything that perceives its environment and acts upon it. Rational agents are designed to maximize their performance based on perceptions. When designing agents, their Performance measure, Environment, Actuators, and Sensors (PEAS) should be considered. Different types of environments and agents are discussed, including reflex, model-based, goal-based, and utility-based agents. Learning agents can adapt their actions based on feedback.
Artificial Intelligence and Machine Learning.pptx
Artificial intelligence and machine learning agents can be categorized based on their architecture, characteristics, and type. The document discusses several types of agents including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, learning agents, multi-agent systems, and hierarchical agents. It also covers reasoning methods like forward chaining and backward chaining.
The document provides an overview of foundations of artificial intelligence including definitions of AI, different views of AI, and capabilities required for a computer to pass a total Turing test. It discusses thinking humanly by modeling human cognition and thinking rationally using logic. It describes acting humanly using the Turing test and acting rationally by being a rational agent that takes actions to achieve goals. The document also covers applications of AI, types of environments and agents, and different approaches to machine learning including reflex agents, model-based agents, goal-based agents, and utility-based agents.
Intelligent Agents, A Discovery on How A Rational Agent Acts
Developing a sound set of design principles for building successful agents, systems that can reasonably be called intelligent, is central to artificial intelligence, so we need to understand how such agents think and act. This presentation covers the topic in detail.
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
3. Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators.
• Human agent: eyes, ears, and other organs for
sensors; hands, legs, mouth, and other body
parts for actuators.
• Robotic agent: cameras and infrared range
finders for sensors; various motors for actuators.
4. Agents
• Software agent (softbot: software robot):
receives keystrokes, file contents, network
packets, etc., as sensory input;
acts on the environment by displaying on
the screen, writing files, and sending network
packets.
5. Agent perception and action
• Percept: an agent's perceptual input at
any given instant.
• Percept sequence: the complete history
of everything the agent has perceived so far.
• Action: an agent's choice of action at any given
instant can depend on the entire percept
sequence observed up to that time.
• Agent function: an agent's behavior is
described by the agent function, which maps any
percept sequence to an action.
6. Agents and environments
• The agent function maps from percept histories
to actions: [f: P* → A]
• The agent function is implemented by an agent
program.
• The agent program runs on the physical
architecture to produce f.
• agent = architecture + program
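As a sketch of how these pieces fit together, the following (illustrative, not from the slides) shows an "architecture" loop driving an agent program f over a stream of percepts:

```python
# A minimal sketch of "agent = architecture + program": the architecture
# feeds the growing percept history to the agent program f (f: P* -> A)
# and executes/collects the actions f returns. All names are illustrative.

def run_architecture(agent_program, percepts):
    """Drive the agent program with a stream of percepts; collect actions."""
    history = []   # the percept sequence P* seen so far
    actions = []
    for p in percepts:
        history.append(p)
        actions.append(agent_program(tuple(history)))
    return actions

# A trivial agent program that acts on the most recent percept only.
f = lambda percept_seq: "Suck" if percept_seq[-1][1] == "Dirty" else "Right"

print(run_architecture(f, [("A", "Dirty"), ("A", "Clean")]))  # ['Suck', 'Right']
```

Note that the architecture is generic: swapping in a different agent program changes the behavior without touching the loop.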
7. Vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
• Percept sequence : Action
[A, Clean] : Right
[A, Dirty] : Suck
[B, Clean] : Left
[B, Dirty] : Suck
[A, Dirty], [A, Clean] : Right
……………
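The percept-sequence table above can be rendered directly as a lookup. This is only a sketch containing the entries listed on the slide (keys are tuples of percepts); a full table would need one entry per possible percept sequence:

```python
# Table-driven vacuum agent: map a percept sequence to an action.
# Only the entries listed above are included; anything else maps to NoOp.

TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

def table_driven_agent(percept_sequence):
    """Look up the action for the complete percept history seen so far."""
    return TABLE.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent([("A", "Dirty")]))                  # Suck
print(table_driven_agent([("A", "Dirty"), ("A", "Clean")]))  # Right
```

The exponential growth of such a table with the length of the percept sequence is exactly why the later agent designs avoid explicit tables.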
8. Rational agents
• An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful.
• Performance measure: An objective criterion for
success of an agent's behavior.
• E.g., the performance measure of a vacuum-cleaner
agent could be the amount of dirt cleaned up,
the amount of time taken, the amount of
electricity consumed, the amount of noise
generated, etc.
9. Rationality
• Rationality at any given time depends on :
–Agent’s prior knowledge of the
environment.
–Actions that the agent can perform.
–Agent’s percept sequence till that
moment.
–The performance measure that defines
the criterion of success.
10. Rational agents
• Rational Agent: For each possible percept
sequence, a rational agent should select
an action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and
whatever built-in knowledge the agent has.
11. Requirements for a rational agent
• Rationality, which is distinct from omniscience
(being all-knowing, with infinite knowledge).
• Information-gathering capability: agents can
perform actions in order to modify future
percepts so as to obtain useful information.
• Learning ability.
• Ability to adapt.
• An agent is autonomous if its behavior is
determined by its own experience (with the ability
to learn and adapt).
12. Building intelligent agents
• An agent is completely specified by the
agent function, which maps percept
sequences to actions.
• An agent has internal data structures
that are updated as new percepts
arrive.
• These data structures are operated on by the
agent's decision-making procedures to
generate an action choice, which is then
passed to the architecture to be executed.
13. Building intelligent agents
• An agent consists of an architecture plus a
program that runs on that architecture.
• In designing intelligent systems, there are
four main factors to consider:
– Percepts: the inputs to our system.
– Actions: the outputs of our system.
– Goals: what the agent is expected to achieve.
– Environment: what the agent is interacting
with.
14. PEAS
• Task environments are the problems to which
rational agents are the solutions. A task
environment is specified by PEAS.
• PEAS stands for: Performance measure,
Environment, Actuators, Sensors.
• PEAS specifies the setting for an
intelligent agent design.
17. PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
18. PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
19. Environment types
Fully observable vs. partially observable:
An environment is fully observable if the agent's
sensors give it access to the complete state of the
environment at each point in time, e.g., a chess game.
An environment is effectively fully observable if the
sensors detect all aspects that are relevant to the
choice of action.
20. Environment types
Partially observable:
An environment might be partially
observable because of noisy and
inaccurate sensors, or because parts of the
state are simply missing from the sensor
data.
Example: the local dirt sensor of the vacuum
cleaner cannot tell whether other squares are
clean or not.
21. Environment types
Deterministic vs. stochastic:
If the next state of the environment is completely
determined by the current state and the actions
executed by the agent, then the environment is
deterministic; otherwise, it is stochastic.
Strategic environment: deterministic except for the
actions of other agents.
The vacuum-cleaner and taxi-driving environments are
stochastic because of unobservable aspects (noise or
unknown factors).
22. Environment Types
• Episodic (vs. sequential): The agent's
experience is divided into atomic
"episodes" (each episode consists of the
agent perceiving and then performing a
single action), and the choice of action in
each episode depends only on the
episode itself, e.g., mail sorting system,
finding defective points in a line.
23. Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does.)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent
operating by itself in an environment.
24. Environment types

                   Chess with   Chess without   Taxi
                   a clock      a clock         driving
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic       No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No

• The environment type largely determines the agent design.
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, and multi-agent.
25. Agent functions and programs
• An agent is completely specified by the
agent function mapping percept
sequences to actions.
• One agent function (or a small
equivalence class) is rational.
• Aim: find a way to implement the rational
agent function concisely.
26. Agent types
Four basic types of agents in order of
increasing sophistication:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
27. Simple reflex agents
• A set of condition–action rules (situation–action
rules, productions, if–then rules) is defined.
• The agent works by finding a rule whose condition
matches the current situation (as defined by the
percept) and then doing the action associated
with that rule.
e.g., if the car-in-front is braking
then initiate-braking
• This agent function succeeds only when the
environment is fully observable.
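A simple reflex agent can be sketched as a loop over condition–action rules; the rule names and percept encoding below are illustrative, not from the slides:

```python
# A minimal sketch of a simple reflex agent: match the CURRENT percept
# (only) against condition-action rules and return the first action
# whose condition holds. Conditions are predicates over the percept.

def simple_reflex_agent(percept, rules):
    """Select an action based only on the current percept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"  # default when no rule matches

# Example rule for the braking scenario described above.
rules = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate-braking"),
]

print(simple_reflex_agent({"car_in_front_is_braking": True}, rules))   # initiate-braking
print(simple_reflex_agent({"car_in_front_is_braking": False}, rules))  # NoOp
```

Because the agent consults only the current percept, it has no way to act on anything its sensors cannot currently see, which is why full observability is required.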
30. A Simple Reflex Agent in Nature
Percepts: (size, motion)
Rules:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
(3) Else (not moving), then NOOP (needed for completeness)
Action: SNAP, AVOID, or NOOP
31. Model-based reflex agents
• Model-based agents can handle partially
observable environments.
• The agent's current state is stored internally,
in a structure that describes the part of the
world that cannot be seen. This requires
information on how the world behaves and
works; this additional information completes
the "world view" model.
32. Model-based reflex agents
• A model-based reflex agent keeps track
of the current state of the world using an
internal model. It then chooses an action
in the same way as the reflex agent.
• The state changes whenever an action is
performed or something is perceived in the
environment.
33. Model-based reflex agents
For a world that is partially observable, the agent
has to keep track of an internal state that depends
on the percept history, reflecting some of the
unobserved aspects (e.g., driving a car and
changing lanes). This requires two types of
knowledge:
• How the world evolves independently of the
agent
• How the agent's actions affect the world
34. Example: Table Agent with Internal State

IF                                                          THEN
Saw an object ahead, turned right, and it's now clear ahead  Go straight
Saw an object ahead, turned right, and object ahead again    Halt
See no objects ahead                                         Go straight
See an object ahead                                          Turn randomly
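The table above can be sketched as a small state machine. The encoding below (state names, percept strings) is illustrative, and the "turn randomly" row is simplified to a fixed right turn so the example stays deterministic:

```python
# Table agent with internal state: remember whether we already saw an
# object and turned right, so the next percept can be interpreted.

def table_agent(state, percept):
    """percept is 'object_ahead' or 'clear'; returns (new_state, action)."""
    if state == "turned_right_after_object":
        if percept == "clear":
            return "start", "go straight"   # turned right, now clear ahead
        return "start", "halt"              # turned right, object ahead again
    if percept == "clear":
        return "start", "go straight"       # no objects ahead
    return "turned_right_after_object", "turn right"  # object ahead (simplified)

state = "start"
state, a1 = table_agent(state, "object_ahead")  # turn right
state, a2 = table_agent(state, "clear")         # go straight
print(a1, a2)
```

The internal state is what lets the same percept ("object ahead") produce different actions depending on what came before; a stateless reflex agent cannot make that distinction.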
35. Example Reflex Agent with Internal State:
Wall-Following
Actions: left, right, straight, open-door
Rules:
1. If open(left) and open(right) and open(straight) then
choose randomly between right and left
2. If wall(left) and open(right) and open(straight) then straight
3. If wall(right) and open(left) and open(straight) then straight
4. If wall(right) and open(left) and wall(straight) then left
5. If wall(left) and open(right) and wall(straight) then right
6. If wall(left) and door(right) and wall(straight) then open-door
7. If wall(right) and wall(left) and open(straight) then straight
8. (Default) Move randomly
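The wall-following rules translate almost directly into code; the percept representation below (a dict of 'wall'/'open'/'door' values per direction) is an assumption for illustration:

```python
import random

# A minimal sketch of the wall-following rule set above. The percept is
# {'left': ..., 'right': ..., 'straight': ...}, where each value is
# 'wall', 'open', or 'door'. Rules are checked in the listed order.

def wall_follower(p):
    L, R, S = p["left"], p["right"], p["straight"]
    if L == "open" and R == "open" and S == "open":
        return random.choice(["right", "left"])          # rule 1
    if L == "wall" and R == "open" and S == "open":
        return "straight"                                # rule 2
    if R == "wall" and L == "open" and S == "open":
        return "straight"                                # rule 3
    if R == "wall" and L == "open" and S == "wall":
        return "left"                                    # rule 4
    if L == "wall" and R == "open" and S == "wall":
        return "right"                                   # rule 5
    if L == "wall" and R == "door" and S == "wall":
        return "open-door"                               # rule 6
    if R == "wall" and L == "wall" and S == "open":
        return "straight"                                # rule 7
    return random.choice(["left", "right", "straight"])  # rule 8 (default)

print(wall_follower({"left": "wall", "right": "open", "straight": "wall"}))  # right
```

Ordering the checks from most to least specific means the default random move fires only when no wall-following rule applies.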
38. Goal-based agents
• Knowing the current state of the
environment is not always enough to
decide what to do; in addition, some sort of goal
information, which describes situations that
are desirable, is also required. This gives
the agent a way to choose among multiple
possibilities, selecting the one that
reaches a goal state.
39. Goal-based agents
• The current state of the environment is
not always enough; the goal is a further
criterion for judging the rationality/correctness
of behavior.
• Actions are chosen to achieve goals, based on
the current state and
the current percept.
40. Goal-based agents
Conclusion: goal-based agents are more flexible;
the same agent can pursue different goals for
different tasks. Search and planning, two other
sub-fields of AI, are used to find the action
sequences that achieve the agent's goals.
41. A model-based, goal-based
agent
• It keeps track of the world state and a set
of goals it is trying to achieve, and picks an
action that will eventually lead to the
achievement of its goals.
43. Utility-based agents
• Goal-based agents only distinguish between
goal states and non-goal states.
• It is possible to define a measure of how
desirable a particular state is. This measure can
be obtained through a utility function,
which maps a state to a measure of the utility of
that state. A utility function thus maps a state onto a
real number, which describes the associated
degree of happiness.
• A complete specification of the utility function
allows rational decisions.
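Utility-based action selection can be sketched as evaluating the state each action leads to and picking the best; the toy state/utility definitions below are illustrative only:

```python
# A minimal sketch of utility-based action selection: evaluate the
# state each action leads to with a utility function and pick the
# action with the highest utility.

def best_action(state, actions, result, utility):
    """Choose the action whose resulting state has maximum utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are numbers; utility prefers states close to 10.
result = lambda s, a: s + a          # deterministic transition model
utility = lambda s: -abs(10 - s)     # real-valued "degree of happiness"

print(best_action(3, [-1, 2, 5], result, utility))  # 5 (reaches 8, closest to 10)
```

Unlike a goal test, which only answers yes/no, the real-valued utility lets the agent rank all the candidate outcomes, including ones that miss the goal by different amounts.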
44. Utility-based agents
• Goals alone are not enough to generate
high-quality behavior (e.g., are the meals in
the canteen good or not?).
• Many action sequences may achieve the goals;
some are better and some worse.
• If a goal means success, then utility means
the degree of success (how successful it is).
45. Utility-based agents
• If state A is preferred over other states,
state A is said to have higher utility.
• Utility is therefore a function that maps a
state onto a real number: the degree of
success.
46. Utility-based agents
Utility has several advantages:
• When there are conflicting goals, only
some of the goals (but not all) can be
achieved; utility describes the appropriate
trade-off.
• When there are several goals, none of which
can be achieved with certainty, utility
provides a basis for decision-making.
47. A model-based, utility-based
agent
• It uses a model of the world along with a
utility function that measures its
preferences among states of the world.
It then chooses the action that leads to the
best expected utility.
49. Learning agents
• After an agent is programmed, can it
work immediately? No, it still needs
teaching.
• In AI, once an agent is built, we teach it
by giving it a set of examples and test it
using another set of examples. We then
say the agent learns.
50. Learning agents
Four conceptual components:
• Learning element: making improvements.
• Performance element: selecting external actions.
• Critic: tells the learning element how well the agent is doing with
respect to a fixed performance standard
(feedback from the user or examples: good or not?).
• Problem generator: suggests actions that will lead to new and
informative experiences.
51. Learning agents
• Learning has the advantage that it allows
an agent to initially operate in unknown
environments and to become more
competent than its initial knowledge alone
might allow.
52. Learning agents
• Components:
– Learning element: responsible for making
improvements.
– Performance element: responsible for
selecting actions.
– Critic: provides feedback.
– Problem generator: responsible for
suggesting actions that will lead to new
experiences.
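The four components can be wired into a single agent step. Everything below (the component functions, the toy "echo the percept" task) is an illustrative assumption, not from the slides:

```python
# A minimal sketch of the four learning-agent components. The toy
# performance standard is "the correct action is to echo the percept";
# the critic supplies a correction, and the learning element stores it.

def performance_element(percept, state):
    # Select an external action using the policy learned so far.
    return state["policy"].get(percept, "NoOp")

def critic(percept, action):
    # Fixed performance standard: correct action echoes the percept.
    return percept if action != percept else None  # correction or None

def learning_element(state, percept, feedback):
    # Make improvements: store the critic's correction in the policy.
    if feedback is not None:
        state["policy"][percept] = feedback
    return state

def agent_step(percept, state):
    action = performance_element(percept, state)
    feedback = critic(percept, action)
    state = learning_element(state, percept, feedback)
    return action, state

state = {"policy": {}}
a1, state = agent_step("ping", state)  # NoOp: nothing learned yet
a2, state = agent_step("ping", state)  # ping: learned from the critic
print(a1, a2)
```

A problem generator could be added as a fifth function that occasionally injects an unseen percept to force new, informative experiences.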