The document discusses different types of intelligent agents. It defines an agent as anything that can perceive its environment and act upon it, and gives examples of human, robotic, and software agents. The document also discusses rational agents and how they should strive to maximize their performance given their perceptions and available actions. Finally, it outlines the types of environments agents may operate in, including fully/partially observable, deterministic/stochastic, and single/multi-agent environments.
The document provides an overview of artificial intelligence (AI) and intelligent agents. It defines AI as the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence. An intelligent agent is described as anything that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through learning or using knowledge. The key components of an intelligent agent are described as its architecture, agent function that maps perceptions to actions, and agent program that implements the function. Different types of agents are discussed including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.
This document discusses intelligent agents and their environments. It defines agents as things that perceive their environment and act on it. Rational agents are those that select actions expected to maximize their performance measure given what they perceive. The PEAS framework describes an agent's Performance measure, Environment, Actuators, and Sensors. Environment types include fully/partially observable, deterministic/stochastic, and more. Agent types range from simple reflex agents to more complex goal-based and utility-based agents.
An intelligent agent is an autonomous entity that perceives its environment and takes actions to maximize its chances of successfully achieving its goals. It has sensors to observe the environment and actuators to perform actions. A rational intelligent agent selects actions that are expected to be most useful based on its past experiences and built-in knowledge. Specifying the task environment through PEAS - performance measure, environment, actuators, and sensors - helps define the problem an intelligent agent aims to solve.
This document provides an overview of intelligent agents, including:
1) It defines agents and environments, and describes how agents perceive their environment through sensors and act through actuators. Examples of human and robotic agents are given.
2) It introduces the concept of rational agents and the PEAS framework for defining an agent's Performance measure, Environment, Actuators, and Sensors. Several examples of applying PEAS are described.
3) It describes different types of environments an agent may operate in, including fully/partially observable, deterministic/stochastic, and discrete/continuous.
The document defines key concepts in artificial intelligence including intelligent agents, environments, and rational agents. An intelligent agent is anything that can perceive its environment and take actions to achieve its goals. Rational agents aim to maximize their performance measure given their percepts and knowledge. Different types of agents are described including reflex agents, model-based agents, goal-based agents, and utility-based agents. State representations and learning agents are also covered at a high level.
1) An intelligent agent is anything that perceives its environment through sensors and acts upon the environment through effectors. Agents can be humans, robots, software programs, etc.
2) The PEAS framework is used to specify the setting for designing an intelligent agent by defining the Performance measure, Environment, Actuators, and Sensors.
3) There are different types of environments including fully/partially observable, deterministic/stochastic, and static/dynamic environments that impact agent design.
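A PEAS specification like the one described above can be captured as a small data structure. The following sketch is illustrative only: the field names and the automated-taxi example are assumptions, not taken from the document.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment specification: Performance measure,
    Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# A classic textbook-style example: an automated taxi.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer"],
)
```

Writing the specification down this way makes the four PEAS components explicit before any agent program is designed.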
This document discusses intelligent agents and their design. It begins by defining an agent as anything that can perceive its environment and act upon it. It then describes different types of agents including human agents, robotic agents, and software agents. It introduces the concepts of percepts, actions, and agent functions. It also discusses rational agents and the requirements for rational behavior. Finally, it covers different aspects of agent design including performance measures, environments, actuators, sensors (PEAS), environment types, and the four basic types of agents from simple reflex agents to utility-based agents.
This document discusses different types of intelligent agents. It describes four basic types of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Simple reflex agents select actions based only on the current percept, while model-based reflex agents maintain an internal model of the world. Goal-based agents use goals to determine desirable situations. Utility-based agents maximize an internal utility function that represents the performance measure. The document also discusses agent functions, percepts, environments, and the PEAS properties of task environments.
The document discusses different types of intelligent agents and their characteristics. It defines an agent as anything that can perceive its environment and act upon it. Example agent types include human agents, robotic agents, and software agents. The document also discusses windshield wiper agents as an example and covers agent terminology such as goals, percepts, sensors, effectors, and actions. Later sections discuss rational agents and how they are designed to maximize their performance based on their percept sequences and knowledge. Different types of agents are introduced, including simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. The document also covers properties of task environments and the structure of agents.
An intelligent agent perceives its environment through sensors, acts upon the environment through effectors or actuators, and aims to maximize a performance measure in its environment. The document discusses different types of agent programs including table-driven agents, reflex agents, model-based reflex agents, goal-based agents, and learning agents. It also examines properties of task environments such as being fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, single-agent or multi-agent.
This document discusses different types of agents and environments for artificial intelligence. It defines an agent as anything that perceives its environment and acts upon it. Agents can be human, robotic, or other entities. Environments are characterized by percepts, actions, and a performance measure for evaluating an agent's success. Rational agents aim to maximize their performance based on percept sequences and built-in knowledge. More advanced agent types include model-based reflex agents that maintain internal models, goal-based agents that act based on goals, and utility-based agents that optimize for utility functions. Learning agents can improve over time using feedback from a critic component.
The document provides an overview of foundations of artificial intelligence including definitions of AI, different views of AI, and capabilities required for a computer to pass a total Turing test. It discusses thinking humanly by modeling human cognition and thinking rationally using logic. It describes acting humanly using the Turing test and acting rationally by being a rational agent that takes actions to achieve goals. The document also covers applications of AI, types of environments and agents, and different approaches to machine learning including reflex agents, model-based agents, goal-based agents, and utility-based agents.
An agent is anything that perceives its environment through sensors and acts upon it through actuators. A rational agent aims to maximize its performance measure by selecting actions expected to have the best outcome, given its percepts and built-in knowledge. To design a rational agent, its task environment must be specified using the PEAS framework of Performance measure, Environment, Actuators, and Sensors. There are four main types of agents: simple reflex agents that react solely based on current percepts; model-based reflex agents that also consider past states; goal-based agents that take future goals into account; and utility-based agents that choose actions to maximize expected utility or happiness.
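The difference between the first two agent types can be made concrete with a minimal sketch. The rule table and the state-update logic below are illustrative assumptions, not the document's own code.

```python
# Simple reflex agent: maps the current percept directly to an action.
def simple_reflex_agent(percept, rules):
    """Select an action based only on the current percept."""
    return rules.get(percept, "NoOp")

# Model-based reflex agent: also keeps an internal state built from the
# percept history, so it can act sensibly in partially observable worlds.
class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.rules = rules
        self.state = {}  # internal model of the world

    def act(self, percept):
        location, status = percept
        self.state[location] = status  # remember what we last saw
        # If the model says every known square is clean, do nothing.
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"
        return self.rules.get(percept, "NoOp")

rules = {("A", "Dirty"): "Suck", ("B", "Dirty"): "Suck",
         ("A", "Clean"): "Right", ("B", "Clean"): "Left"}
print(simple_reflex_agent(("A", "Dirty"), rules))  # Suck
```

Given the same clean-square percept, the simple reflex agent always moves, while the model-based agent can consult its internal state and stop once everything it has seen is clean.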
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. An agent:
• Operates in an environment
• Perceives its environment through sensors
• Acts upon its environment through actuators/effectors
• Has goals
This document discusses intelligent agents and their environments. It defines an agent as anything that perceives its environment and acts upon it. Rational agents are those that select actions expected to maximize their performance based on perceptions. An agent's task environment consists of its performance measure, environment, actuators, and sensors. Environment types include fully/partially observable, deterministic/stochastic, and single/multi-agent. Four basic agent types are described: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Learning agents use feedback to improve performance over time. The document provides examples of agents and discusses their design considerations based on task environment properties.
An intelligent agent is anything that perceives its environment through sensors and acts to achieve goals. Agents can be described using the PAGE framework of Percepts, Actions, Goals, and Environment. Rational agents choose actions that are expected to maximize performance given past experiences. Agent types include reflex, state-based, goal-based, utility-based, and learning agents.
This document provides an overview of intelligent agents and their environments. It defines key concepts like agents, environments, percepts, actions, and rational agents. It also describes different types of agents including table-driven agents and simple reflex agents. Additionally, it covers the structure of agents and discusses how they are composed of an architecture and program. It examines properties of environments like observability, determinism, and how real-world environments tend to be partially observable and stochastic.
This document discusses intelligent agents and their characteristics. It defines an intelligent agent as an autonomous entity that can perceive its environment through sensors and act upon the environment through effectors to achieve goals. Intelligent agents must be able to perceive, make decisions based on perceptions, take actions as a result of decisions, and take rational actions. The document also describes different types of agents including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Finally, it discusses environments that agents can operate in based on characteristics like observability, predictability, event structure, existence of other agents, consistency, certainty, and accessibility.
Artificial Intelligence, Chapter Two: Agents (Ehsan Nowrouzi)
The document outlines different types of intelligent agents and their environments. It discusses agents and environments, rational agents, and the PEAS framework for specifying an agent's Performance measure, Environment, Actuators, and Sensors. Environment types include fully/partially observable, deterministic/stochastic, episodic/sequential, static/dynamic, discrete/continuous, and single/multi-agent. The document also covers agent functions, programs, and basic types including simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents.
The document summarizes the following key points about the Turing Test:
1) The Turing Test proposes that a computer can be considered intelligent if an interrogator cannot distinguish it from a human via conversation.
2) Notable chatbots that have attempted the Turing Test include ELIZA, Parry, and Eugene Goostman. Eugene Goostman convinced 29% of judges it was human.
3) Critics argue that passing the Turing Test does not prove a machine has human-level understanding, as it can mimic responses without true comprehension.
Introduction to agents, structure (configuration) of intelligent agents, properties of intelligent agents
2.2. PEAS Description of Agents
2.3. Types of Agents: Simple Reflex, Model-Based, Goal-Based, Utility-Based, Learning Agent
2.4. Types of Environments: Deterministic/Stochastic, Static/Dynamic, Observable/Semi-observable, Single-Agent/Multi-Agent
The document discusses different types of intelligent agents in artificial intelligence. It defines an agent as anything that can perceive its environment through sensors and act upon the environment through effectors. It then describes different types of agent programs:
1) Simple reflex agents that take actions based solely on current percepts.
2) Model-based reflex agents that maintain an internal state based on past percepts allowing them to operate in partially observable environments.
3) Goal-based agents that choose actions to achieve goals by considering long sequences of possible actions through planning.
4) Utility-based agents that act not just based on goals but also to maximize utility or success at achieving goals.
5) Learning agents that can improve their performance over time using feedback on how well they are doing.
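The distinction drawn in points 3 and 4 between goal-based and utility-based choice can be sketched as follows. The toy transition model, candidate actions, and utility numbers are invented purely for illustration.

```python
# A goal-based agent accepts any action whose predicted outcome satisfies
# the goal; a utility-based agent ranks outcomes by a real-valued utility
# and picks the best one.

def predicted_outcome(state, action):
    """Toy transition model: 'suck' cleans, 'move' advances position."""
    new = dict(state)
    if action == "suck":
        new["dirt"] = 0
    elif action == "move":
        new["position"] += 1
    return new

def goal_based_choice(state, actions, goal_test):
    for a in actions:
        if goal_test(predicted_outcome(state, a)):
            return a
    return None  # no action achieves the goal

def utility_based_choice(state, actions, utility):
    return max(actions, key=lambda a: utility(predicted_outcome(state, a)))

state = {"dirt": 1, "position": 0}
actions = ["move", "suck"]
goal = lambda s: s["dirt"] == 0
# Utility rewards cleanliness more than forward progress.
utility = lambda s: 10 * (1 - s["dirt"]) + s["position"]

print(goal_based_choice(state, actions, goal))        # suck
print(utility_based_choice(state, actions, utility))  # suck
```

Here both agents pick the same action, but the utility-based agent would still make a graded choice in states where no action fully achieves the goal, which is exactly where a binary goal test falls short.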
1) Intelligent agents are systems that perceive their environment and act upon it. They can be designed to act or think rationally or humanly.
2) An agent is anything that can perceive its environment through sensors and act upon the environment through effectors, mapping percept sequences to actions.
3) Key properties of intelligent agents include autonomy, reactivity, proactiveness, balancing reactive and goal-oriented behavior, and social ability. Agents must be able to operate independently, respond to changes, pursue goals, and interact with other agents.
This document discusses intelligent agents and their environments. It defines an agent as anything that perceives its environment through sensors and acts upon the environment through actuators. Rational agents aim to maximize their performance measure by selecting actions expected to have the best outcome based on their percepts. An agent's task environment is defined by its performance measure, environment, actuators, and sensors (PEAS). Environment types are classified by whether they are observable, deterministic, episodic, static, discrete, or single-agent. Basic agent architectures include simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents.
Agents
• An agent is anything that can be viewed as
– perceiving its environment through sensors and
– acting upon that environment through actuators
– Assumption: Every agent can perceive its own actions (but not
always the effects)
Example: Vacuum-Agent
• Percepts:
Location and status,
e.g., [A,Dirty]
• Actions:
Left, Right, Suck, NoOp
function Vacuum-Agent([location,status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
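The pseudocode translates directly into Python; a minimal sketch, with names chosen to match the slide:

```python
# Simple reflex vacuum agent: the percept is a (location, status) pair
def vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'
    return 'NoOp'

print(vacuum_agent(('A', 'Dirty')))  # Suck
print(vacuum_agent(('B', 'Clean')))  # Left
```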
Agents
• Human agent:
  – eyes, ears, and other organs for sensors;
  – hands, legs, mouth, and other body parts for actuators
• Robotic agent:
  – cameras and infrared range finders for sensors;
  – various motors for actuators
• Software agent:
  – keystrokes, file contents, received network packets as sensors
  – displays on the screen, files, sent network packets as actuators
Agents and environments
• Percept: the agent's perceptual input at any given instant
• Percept sequence: the complete history of everything the agent has ever perceived
• An agent's choice of action at any given instant can depend on the entire percept sequence observed to date
• An agent's behavior is described by the agent function, which maps percept histories to actions:
  f : P* → A
• We can imagine tabulating the agent function that describes any given agent (external characterization)
• Internally, the agent function is implemented by an agent program, which runs on the physical architecture to produce f
• agent = architecture + program
Rational agents
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform
• The right action is the one that will cause the agent to be most successful
• Performance measure: an objective criterion for success of an agent's behavior
• E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
• As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave (amount of dirt cleaned vs. a clean floor)
• A more suitable measure would reward the agent for having a clean floor
Rationality
• What is rational at any given time depends on four things
– The performance measure that defines the criterion of success
– The agent’s prior knowledge of the environment
– The actions that the agent can perform
– The agent’s percept sequence to date
Rational agents
• For each possible percept sequence, a rational agent
should select an action that is expected to maximize
its performance measure, given the evidence
provided by the percept sequence and the agent’s
built-in knowledge
• Performance measure (utility function):
An objective criterion for success of an agent's
behavior
• Expected utility:
  EU(action) = Σ_outcomes P(outcome | action) · U(outcome)
• Can a rational agent make mistakes?
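The expected-utility sum can be computed directly. A minimal sketch in Python, using invented outcome probabilities and utilities for a Suck action in the vacuum world:

```python
# Expected utility: EU(action) = sum over outcomes of P(outcome | action) * U(outcome)
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: Suck cleans the square with probability 0.9
suck = [(0.9, 10.0),   # square becomes clean, utility 10
        (0.1, 0.0)]    # square stays dirty, utility 0
noop = [(1.0, 0.0)]    # nothing changes

# A rational agent picks the action with the highest expected utility
print(expected_utility(suck))  # 9.0
print(expected_utility(noop))  # 0.0
```

Note that a rational agent can still "make mistakes" in hindsight: it maximizes expected utility given its evidence, not the actual outcome.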
Back to Vacuum-Agent
• Percepts:
Location and status,
e.g., [A,Dirty]
• Actions:
Left, Right, Suck, NoOp
function Vacuum-Agent([location,status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
• Is this agent rational?
– Depends on performance measure, environment properties
Vacuum cleaner agent
• Let’s assume the following
– The performance measure awards one point for each clean square
at each time step, over a lifetime of 1000 time steps
– The geography of the environment is known a priori but the dirt
distribution and the initial location of the agent are not. Clean
squares stay clean and the sucking cleans the current square. The
Left and Right actions move the agent left and right except when
this would take the agent outside the environment, in which case
the agent remains where it is
– The only available actions are Left, Right, Suck and NoOp
– The agent correctly perceives its location and whether that location
contains dirt
• Under these circumstances the agent is rational: its expected performance is at least as high as that of any other agent
Vacuum cleaner agent
• Same agent would be irrational under different circumstances
– Once all dirt is cleaned up, it will oscillate needlessly back and forth.
– If the performance measure includes a penalty of one point for each
movement left or right, the agent will fare poorly.
– A better agent for this case would do nothing once it is sure that all the
squares are clean.
– If the clean squares can become dirty again, the agent should occasionally
check and clean them if needed.
– If the geography of the environment is unknown the agent will need to
explore it rather than stick to squares A and B
Rational agents
• Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
• Rationality maximizes expected performance while
perfection maximizes actual performance
• Agents can perform actions in order to modify future
percepts so as to obtain useful information (information
gathering, exploration)
• An agent is autonomous if its behavior is determined by its
own experience (with ability to learn and adapt)
Specifying the task environment (PEAS)
• PEAS:
– Performance measure,
– Environment,
– Actuators,
– Sensors
• In designing an agent, the first step must always
be to specify the task environment (PEAS) as fully
as possible
Specifying the task environment
• PEAS: Performance measure, Environment, Actuators,
Sensors
• P: a function the agent is maximizing (or minimizing)
– Assumed given
– In practice, needs to be computed somewhere
• E: a formal representation for world states
– For concreteness, a tuple (var1=val1, var2=val2, … ,varn=valn)
• A: actions that change the state according to a transition
model
– Given a state and action, what is the successor state
(or distribution over successor states)?
• S: observations that allow the agent to infer the world state
– Often come in very different form than the state itself
– E.g., in tracking, observations may be pixels and state variables 3D
coordinates
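The E and A items above can be made concrete. A minimal sketch, assuming the two-square vacuum world from earlier slides, of a state tuple and a deterministic transition model (the variable names are invented):

```python
# State as a tuple of variables: agent location and dirt status of each square
# (loc, dirt_a, dirt_b), e.g., ('A', True, False) means agent at A, A dirty, B clean
def transition(state, action):
    """Deterministic transition model: successor state for a state/action pair."""
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':
        if loc == 'A':
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    return state  # NoOp

print(transition(('A', True, False), 'Suck'))   # ('A', False, False)
print(transition(('A', False, False), 'Right')) # ('B', False, False)
```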
Another PEAS example: Spam filter
• Performance measure
– Minimizing false positives, false negatives
• Environment
– A user’s email account, email server
• Actuators
– Mark as spam, delete, etc.
• Sensors
– Incoming messages, other information about
user’s account
PEAS for a medical diagnosis system
• Performance measure: Healthy patient, minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests, diagnoses, treatments,
referrals)
• Sensors: Keyboard (entry of symptoms, findings, patient's
answers)
PEAS for a satellite image analysis system
• Performance measure: correct image categorization
• Environment: downlink from orbiting satellite
• Actuators: display categorization of scene
• Sensors: color pixel arrays
PEAS for a part-picking robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS for a refinery controller
• Performance measure: maximize purity, yield, safety
• Environment: refinery, operators
• Actuators: valves, pumps, heaters, displays
• Sensors: temperature, pressure, chemical sensors
PEAS for Interactive English tutor
• Performance measure: Maximize student's score on test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions, corrections)
• Sensors: Keyboard
Environment types
• Fully observable vs. partially observable
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
Environment types
Fully observable vs. partially observable:
• An environment is fully observable if an agent's sensors
give it access to the complete state of the environment at
each point in time.
• Fully observable environments are convenient, because the
agent need not maintain any internal state to keep track of
the world
• An environment might be partially observable because of
noisy and inaccurate sensors or because parts of the state
are simply missing from the sensor data
• Examples: vacuum cleaner with local dirt sensor, taxi
driver
Environment types
Deterministic vs. stochastic:
• The environment is deterministic if the next state of the
environment is completely determined by the current state
and the action executed by the agent.
• In principle, an agent need not worry about uncertainty in a
fully observable, deterministic environment
• If the environment is partially observable then it could
appear to be stochastic
• Examples: Vacuum world is deterministic while taxi driver
is not
• If the environment is deterministic except for the actions of
other agents, then the environment is strategic
Environment types
Episodic vs. sequential:
• In episodic environments, the agent's experience is divided
into atomic "episodes" (each episode consists of the agent
perceiving and then performing a single action), and the
choice of action in each episode depends only on the
episode itself.
• Examples: classification tasks
• In sequential environments, the current decision could
affect all future decisions
• Examples: chess and taxi driver
Environment types
Static vs. dynamic:
• The environment is unchanged while an agent is deliberating.
• Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time
• Dynamic environments continuously ask the agent what it wants to do
• The environment is semi-dynamic if the environment itself does not
change with the passage of time but the agent's performance score
does
• Examples: taxi driving is dynamic, chess when played with a clock is
semi-dynamic, crossword puzzles are static
Environment types
Discrete vs. continuous:
• A discrete environment has a limited number of distinct, clearly defined states, percepts and actions.
• Examples: Chess has a finite number of discrete states, and discrete sets of percepts and actions. Taxi driving has continuous states and actions
Environment types
Single agent vs. multiagent:
• An agent operating by itself in an environment is single agent
• Examples: Crossword is a single agent while chess is two-agents
• Question: Does an agent A have to treat an object B as an agent, or can B be treated as a stochastically behaving object?
• The test is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior
• Examples: chess is a competitive multiagent environment, while taxi driving is a partially cooperative multiagent environment
Examples of different environments

                Chess with a clock   Scrabble     Autonomous driving   Word jumble solver
Observable      Fully                Partially    Partially            Fully
Deterministic   Strategic            Stochastic   Stochastic           Deterministic
Episodic        Sequential           Sequential   Sequential           Episodic
Static          Semidynamic          Static       Dynamic              Static
Discrete        Discrete             Discrete     Continuous           Discrete
Single agent    Multi                Multi        Multi                Single
Preview of the course
• Deterministic environments: search, constraint
satisfaction, logic
– Can be sequential or episodic
• Multi-agent, strategic environments: minimax
search, games
– Can also be stochastic, partially observable
• Stochastic environments
– Episodic: Bayesian networks, pattern classifiers
– Sequential, known: Markov decision processes
– Sequential, unknown: reinforcement learning
Agent functions and programs
• An agent is completely specified by the agent
function mapping percept sequences to actions
• One agent function (or a small equivalence class)
is rational
• Aim: find a way to implement the rational agent
function concisely -> design an agent program
Agent = agent program + architecture
• Architecture: some sort of computing device with
physical sensors and actuators (PC, robotic car)
– should be appropriate: walk action requires legs
Agent functions and programs
• Agent program:
  – Takes the current percept as input from the sensors
  – Returns an action to the actuators
  – While the agent function takes the whole percept history, the agent program takes just the current percept as input, which is the only input available from the environment
  – The agent needs to remember the whole percept sequence itself, if its actions depend on it
Table-lookup agent
• A trivial agent program: keeps track of the percept sequence and then uses it to
index into a table of actions to decide what to do
• The designers must construct the table that contains the appropriate action for
every possible percept sequence
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <-- LOOKUP(percepts, table)
  return action
• Drawbacks:
  – Huge table (on the order of |P|^T entries, where P is the set of possible percepts and T the lifetime)
  – Space needed to store the table
  – It takes a long time to build the table
  – No autonomy
  – Even with learning, it would take a long time to learn the table entries
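The table-driven scheme can be sketched in a few lines of Python; a toy illustration, with an invented table for the two-square vacuum world (only short histories listed, which is exactly the impracticality the drawbacks describe):

```python
# Table-driven agent: the entire behavior is a lookup keyed on the percept history
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-sequence tuples to actions
        self.percepts = []      # history of everything perceived so far

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), 'NoOp')

# Toy table for the two-square vacuum world
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = TableDrivenAgent(table)
print(agent(('A', 'Clean')))  # Right
print(agent(('B', 'Dirty')))  # Suck
```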
Agent types
• Rather than using a table, how can we produce rational behavior from a small amount of code?
• Four basic types in order of increasing generality:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
Simple reflex agents
function REFLEX-VACUUM-AGENT([location,status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
• Select actions on the basis of the current percept, ignoring the rest of the percept history
• Example: simple reflex vacuum cleaner agent
• Condition-action rule
• Example: if car-in-front-is-braking then initiate-braking
Simple reflex agents
• Simple-reflex agents are simple, but they turn out
to be of very limited intelligence
• The agent will work only if the correct decision
can be made on the basis of the current percept –
that is only if the environment is fully observable
• Infinite loops are often unavoidable – escape could
be possible by randomizing
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <-- INTERPRET_INPUT(percept)
  rule <-- RULE_MATCH(state, rules)
  action <-- RULE_ACTION[rule]
  return action
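The same structure can be sketched in Python, with invented rules for the braking example; the rule match reduces to a dictionary lookup here:

```python
# Simple reflex agent: rules map an interpreted state directly to an action
def interpret_input(percept):
    """Abstract the raw percept into a state description (toy version)."""
    return 'car-in-front-is-braking' if percept.get('brake_lights') else 'road-clear'

rules = {
    'car-in-front-is-braking': 'initiate-braking',
    'road-clear': 'keep-driving',
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rules[state]

print(simple_reflex_agent({'brake_lights': True}))   # initiate-braking
print(simple_reflex_agent({'brake_lights': False}))  # keep-driving
```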
Model-based reflex agents
• The agent should keep track of the part of the world it can't see now
• The agent should maintain some sort of internal state that depends
on the percept history and reflects at least some of the unobserved
aspects of the current state
• Updating this internal state as time goes by requires two kinds of knowledge to be encoded in the agent program
  – Information about how the world evolves independently of the agent
  – Information about how the agent's own actions affect the world
• This knowledge of how the world works is called a model of the world; agents that use one are model-based agents
Model-based reflex agents
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <-- UPDATE_STATE(state, action, percept)
  rule <-- RULE_MATCH(state, rules)
  action <-- RULE_ACTION[rule]
  return action
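The same loop in Python; a minimal sketch assuming the two-square vacuum world, where the internal model simply remembers the last known status of each square so the agent can stop instead of oscillating:

```python
# Model-based reflex agent for the two-square vacuum world (toy sketch)
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: believed status of each square (None = unknown)
        self.model = {'A': None, 'B': None}

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        # If we believe everything is clean, stop instead of oscillating
        if all(v == 'Clean' for v in self.model.values()):
            return 'NoOp'
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuumAgent()
print(agent(('A', 'Dirty')))  # Suck
print(agent(('A', 'Clean')))  # Right
print(agent(('B', 'Clean')))  # NoOp
```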
Goal-based agents
• Knowing about the current state of the environment
is not always enough to decide what to do (e.g.
decision at a road junction)
• The agent needs some sort of goal information that
describes situations that are desirable
• The agent program can combine this with
information about the results of possible actions in
order to choose actions that achieve the goal
• Usually requires search and planning
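The search step can be sketched briefly. A minimal breadth-first-search example in Python, using an invented road-junction map to echo the slide's example (states and names are hypothetical):

```python
from collections import deque

# Goal-based agent core: search for an action sequence that reaches a goal state
def search_plan(start, goal_test, successors):
    """Breadth-first search; returns a list of actions reaching a goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # no plan found

# Toy road-junction example: states are places, actions are turns
roads = {'junction': [('left', 'market'), ('right', 'airport')],
         'market': [], 'airport': []}
plan = search_plan('junction', lambda s: s == 'airport', lambda s: roads[s])
print(plan)  # ['right']
```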
Goal-based agents vs reflex-based agents
• Although a goal-based agent may appear less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified
• For the reflex agent, by contrast, we would have to rewrite many condition-action rules
• The goal-based agent's behavior can easily be changed
• The reflex agent's rules must be changed for each new situation
Utility-based agents
• Goals alone are not really enough to generate high
quality behavior in most environments – they just
provide a binary distinction between happy and
unhappy states
• A more general performance measure should allow
a comparison of different world states according to
exactly how happy they would make the agent if
they could be achieved
• Happy – Utility (the quality of being useful)
• A utility function maps a state onto a real number
which describes the associated degree of happiness
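A utility function is just such a mapping from states to real numbers. A minimal sketch, with an invented utility for the two-square vacuum world, of choosing the action whose successor state has the highest utility:

```python
# Utility-based choice: prefer the action leading to the highest-utility state
def utility(state):
    """Invented utility: one point per clean square."""
    loc, dirt_a, dirt_b = state
    return (not dirt_a) + (not dirt_b)

def transition(state, action):
    loc, dirt_a, dirt_b = state
    if action == 'Suck':
        return (loc, dirt_a and loc != 'A', dirt_b and loc != 'B')
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    return state

def choose_action(state, actions=('Suck', 'Left', 'Right', 'NoOp')):
    return max(actions, key=lambda a: utility(transition(state, a)))

print(choose_action(('A', True, False)))  # Suck
```

Unlike a binary goal test, the utility gives a graded comparison: any action is ranked by exactly how good the resulting state is.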
Learning agents
• Turing – instead of actually programming
intelligent machines by hand, which is too much
work, build learning machines and then teach them
• Learning also allows the agent to operate in initially
unknown environments and to become more
competent than its initial knowledge alone might
allow
Learning agents
• Learning element – responsible for making
improvements
• Performance element – responsible for selecting
external actions (it is what we had defined as the
entire agent before)
• Learning element uses feedback from the critic on
how the agent is doing and determines how the
performance element should be modified to do
better in the future
• The problem generator is responsible for suggesting actions that will lead to new and informative experiences