This document discusses different types of intelligent agents and their environments. It describes an agent as anything that perceives its environment and acts upon it. Environments can be fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and involve single or multiple agents. Agent types include simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Learning agents can modify their performance based on feedback. Rational agents aim to maximize their performance measure given their percepts. The document outlines combinations of agent architectures and environment types relevant to artificial intelligence.
www.funFruit.com
Intelligent Agent
What is an Agent?
Agents and Environments
Rational Agents
PEAS Examples
Agent Types
Table-driven agents
Reflex agents
Goal-based agents
Utility-based agents
Learning agents
Environment Types
An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators
Agents and environments
An agent perceives its environment through
sensors.
The complete set of inputs at a given time is
called a percept.
The current percept, or a sequence of
percepts, can influence the actions of an agent. The agent can change the
environment through actuators or
effectors.
An operation involving an effector is
called an action.
Actions can be grouped into action
sequences.
Agents
Human agent:
eyes, ears, and other organs for sensors;
hands, legs, mouth, and other body parts for actuators
Robotic agent:
• cameras and infrared range finders for sensors;
• various motors for actuators
• Example: Sony's AIBO entertainment robot
A software agent:
Keystrokes, file contents, received network packets as sensors
Displays on the screen, files, sent network packets as actuators
Agent Example
A simple example of an agent in a physical environment is a thermostat for a
heater.
The thermostat receives input from a sensor, which is
embedded in the environment, to detect the temperature.
Two states are possible:
temperature too cold
temperature OK
Each state has an associated action:
too cold: turn the heating on
temperature OK: turn the heating off
The first action has the effect of raising the room temperature, but this is not
guaranteed. If cold air continuously comes into the room, the added heat may
not have the desired effect of raising the room temperature.
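This two-state rule can be sketched as a tiny simple reflex agent. The setpoint value and the action strings are illustrative assumptions, not part of the original example:

```python
SETPOINT = 20.0  # assumed threshold, in degrees Celsius

def thermostat_action(temperature):
    """Map the current percept (temperature) directly to an action."""
    if temperature < SETPOINT:
        return "turn heating on"   # state: temperature too cold
    return "turn heating off"      # state: temperature OK
```

Note that the rule consults only the current percept; it has no model of the cold air entering the room, which is exactly why the action's effect is not guaranteed.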
Agent function and program
Agent function
An agent is completely specified by the agent function
[f: P* -> A]
mapping percept sequences to actions
Agent program
The agent program runs on the physical architecture to produce f
AGENT = ARCHITECTURE + PROGRAM
Vacuum-cleaner world
Percepts: location and state of the environment, e.g.,
[A,Dirty], [B,Clean]
Actions: Left, Right,
Suck, NoOp
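A minimal sketch of a reflex agent function for this world, using the percepts and actions listed above (the movement rule when a square is clean is an illustrative choice):

```python
def vacuum_reflex_agent(percept):
    """Simple reflex rule for the two-square vacuum world.

    percept is a (location, status) pair such as ('A', 'Dirty').
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Assumed convention: move toward the other square when clean.
    return "Right" if location == "A" else "Left"
```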
Agent Performance
How do you define success? We need a performance measure:
how successfully the agent performs.
E.g., 90% or 30%?
E.g., reward the agent with one point for each clean square at each time step
(could penalize for costs and noise)
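The clean-square reward above can be sketched as a scoring function over a run of the vacuum world. The history format and the optional move penalty are illustrative assumptions:

```python
def performance(history, penalty_per_move=0.0):
    """Score a run: +1 per clean square at each time step.

    history is a list of (squares, action) pairs, where squares maps
    square name -> 'Clean'/'Dirty'. penalty_per_move is an optional
    cost knob (assumed, defaulting to no penalty).
    """
    score = 0.0
    for squares, action in history:
        score += sum(1 for s in squares.values() if s == "Clean")
        if action != "NoOp":
            score -= penalty_per_move
    return score
```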
2. Outline
Artificial Intelligence: A Modern Approach
⚫Agents and environments
⚫Rationality
⚫PEAS (Performance measure, Environment,
Actuators, Sensors)
⚫Environment types
⚫Agent types
3. Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
4. Agents and environments
• The agent function maps from percept histories to
actions:
[f: P* 🡪 A]
• The agent program runs on the physical architecture to
produce f
• agent = architecture + program
5. Vacuum-cleaner world
⚫ Percepts: location and contents, e.g., [A,Dirty]
⚫ Actions: Left, Right, Suck, NoOp
⚫ Agent’s function 🡪 look-up table
⚪ For many agents this is a very large table
Demo:
http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html
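The look-up-table idea can be sketched directly. The table below covers only a few percept sequences of the vacuum world; a complete table needs one entry per possible percept sequence, which is why it becomes enormous for most agents:

```python
# Table-driven agent: keyed on the ENTIRE percept sequence to date.
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry per possible percept sequence (illustrative subset)
}

percepts = []  # the agent's percept history

def table_driven_agent(percept):
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "NoOp")
```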
6. Rational agents
• Rationality depends on:
– The performance measure defining success
– The agent's prior knowledge of the environment
– The actions that the agent can perform
– The agent's percept sequence to date
• Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
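The definition above is, in effect, an argmax over actions. A minimal sketch, where `expected_performance` is an assumed stand-in for the agent's built-in knowledge plus its performance measure:

```python
def rational_action(actions, percept_sequence, expected_performance):
    """Select the action expected to maximize the performance measure,
    given the evidence in the percept sequence.

    expected_performance(action, percept_sequence) -> number (assumed API).
    """
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))
```

For example, with a toy scoring function over vacuum-world actions, the action with the highest expected score is chosen.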
7. Examples of Rational Choice
⚫See File: intro-choice.doc
8. Rationality
⚫Rationality is different from omniscience
⚪ Percepts may not supply all relevant information
⚪ E.g., in a card game, you don't know the other players' cards.
⚫Rationality is different from perfection
⚪ Rationality maximizes expected outcome, while perfection
maximizes actual outcome.
9. Autonomy in Agents
⚫Extremes
⚪ No autonomy – ignores environment/data
⚪ Complete autonomy – must act randomly/no program
⚫Example: baby learning to crawl
⚫Ideal: design agents to have some autonomy
⚪ Possibly become more autonomous with experience
The autonomy of an agent is the extent to which its
behaviour is determined by its own experience,
rather than by the knowledge built in by its designer.
10. PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an automated
taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard
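A PEAS description is just a four-part record, so it can be captured in a small data structure. This sketch encodes the taxi example above (the class name and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```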
11. PEAS
⚫Agent: Part-picking robot
⚫Performance measure: Percentage of parts in correct
bins
⚫Environment: Conveyor belt with parts, bins
⚫Actuators: Jointed arm and hand
⚫Sensors: Camera, joint angle sensors
12. PEAS
⚫Agent: Interactive English tutor
⚫Performance measure: Maximize student's score on
test
⚫Environment: Set of students
⚫Actuators: Screen display (exercises, suggestions,
corrections)
⚫Sensors: Keyboard
14. Fully observable (vs. partially observable)
⚫Is everything an agent requires to choose its actions
available to it via its sensors? Perfect or Full
information.
⚪ If so, the environment is fully accessible
⚫If not, parts of the environment are inaccessible
⚪ Agent must make informed guesses about world.
⚫In decision theory: perfect information vs. imperfect
information.
Crossword: Fully; Backgammon: Fully; Image analysis: Fully
Taxi driver: Partially; Part-picking robot: Partially; Poker: Partially
15. Deterministic (vs. stochastic)
⚫Does the change in world state
⚪ Depend only on current state and agent’s action?
⚫Non-deterministic environments
⚪ Have aspects beyond the control of the agent
⚪ Utility functions have to guess at changes in world
Crossword: Deterministic; Image analysis: Deterministic
Backgammon: Stochastic; Taxi driver: Stochastic; Part-picking robot: Stochastic; Poker: Stochastic
16. Episodic (vs. sequential):
⚫ Is the choice of current action
⚪ Dependent on previous actions?
⚪ If not, then the environment is episodic
⚫ In non-episodic environments:
⚪ Agent has to plan ahead:
⯍ Current choice will affect future actions
Crossword: Sequential; Backgammon: Sequential; Taxi driver: Sequential; Poker: Sequential
Part-picking robot: Episodic; Image analysis: Episodic
17. Static (vs. dynamic):
⚫Static environments don’t change
⚪ While the agent is deliberating over what to do
⚫Dynamic environments do change
⚪ So agent should/could consult the world when choosing actions
⚪ Alternatively: anticipate the change during deliberation OR make
decision very fast
⚫ Semidynamic: If the environment itself does not change
with the passage of time but the agent's performance
score does.
Crossword: Static; Backgammon: Static; Poker: Static
Taxi driver: Dynamic; Part-picking robot: Dynamic; Image analysis: Semidynamic
Another example: off-line route planning vs. on-board navigation system
18. Discrete (vs. continuous)
⚫ A limited number of distinct, clearly defined percepts and
actions vs. a range of values (continuous)
Crossword: Discrete; Backgammon: Discrete; Poker: Discrete
Taxi driver: Continuous; Part-picking robot: Continuous; Image analysis: Continuous
19. Single agent (vs. multiagent):
⚫ An agent operating by itself in an environment or there are
many agents working together
Crossword: Single; Part-picking robot: Single; Image analysis: Single
Backgammon: Multi; Taxi driver: Multi; Poker: Multi
20. Summary
Crossword: Fully observable, Deterministic, Sequential, Static, Discrete, Single agent
Backgammon: Fully observable, Stochastic, Sequential, Static, Discrete, Multi-agent
Taxi driver: Partially observable, Stochastic, Sequential, Dynamic, Continuous, Multi-agent
Part-picking robot: Partially observable, Stochastic, Episodic, Dynamic, Continuous, Single agent
Poker: Partially observable, Stochastic, Sequential, Static, Discrete, Multi-agent
Image analysis: Fully observable, Deterministic, Episodic, Semidynamic, Continuous, Single agent
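The per-task classifications in the preceding slides can be collected into a small data structure for quick lookup. This sketch simply encodes the slide content; the dictionary layout is an illustrative choice:

```python
# (observable, determinism, episodic?, static?, discrete?, agents)
ENVIRONMENTS = {
    "crossword":      ("fully",     "deterministic", "sequential", "static",  "discrete",   "single"),
    "backgammon":     ("fully",     "stochastic",    "sequential", "static",  "discrete",   "multi"),
    "taxi driver":    ("partially", "stochastic",    "sequential", "dynamic", "continuous", "multi"),
    "part-picking":   ("partially", "stochastic",    "episodic",   "dynamic", "continuous", "single"),
    "poker":          ("partially", "stochastic",    "sequential", "static",  "discrete",   "multi"),
    "image analysis": ("fully",     "deterministic", "episodic",   "semi",    "continuous", "single"),
}

def tasks_with(index, value):
    """All tasks whose property at the given column matches value."""
    return sorted(t for t, props in ENVIRONMENTS.items() if props[index] == value)
```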
22. Agent types
⚫Four basic types in order of increasing generality:
⚪ Simple reflex agents
⚪ Reflex agents with state/model
⚪ Goal-based agents
⚪ Utility-based agents
⚪ All these can be turned into learning agents
⚪ http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html
24. Simple reflex agents
⚫ Simple but very limited intelligence.
⚫ Action does not depend on percept history, only on current
percept.
⚫ Therefore no memory requirements.
⚫ Infinite loops
⚪ Suppose the vacuum cleaner cannot observe its location. What do you do
when the percept is Clean? Moving Left at A or Right at B gives an infinite loop.
⚪ Fly buzzing around a window or light.
⚪ Possible solution: randomize the action.
⚪ Thermostat.
⚫ Chess – openings, endings
⚪ Lookup table (not a good idea in general)
⯍ 35^100 entries would be required for the entire game
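The randomization fix can be sketched for a location-blind vacuum agent. The function name and interface are illustrative; the point is only that a random move escapes the wall that a fixed move would bump into forever:

```python
import random

def location_blind_vacuum(status, rng=random):
    """Reflex vacuum agent that cannot sense its location.

    Always choosing Left (or always Right) on a Clean percept can loop
    forever at a wall; picking a random direction breaks the loop.
    """
    if status == "Dirty":
        return "Suck"
    return rng.choice(["Left", "Right"])
```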
25. States: Beyond Reflexes
• Recall the agent function that maps from percept histories
to actions:
[f: P* 🡪 A]
⚫ An agent program can implement an agent function by
maintaining an internal state.
⚫ The internal state can contain information about the state
of the external environment.
⚫ The state depends on the history of percepts and on the
history of actions taken:
[f: P*, A*🡪 S 🡪A] where S is the set of states.
⚫ If each internal state includes all information relevant to
decision making, the state space is Markovian.
26. States and Memory: Game Theory
⚫If each state includes the information about the
percepts and actions that led to it, the state space has
perfect recall.
⚫Perfect Information = Perfect Recall + Full
Observability + Deterministic Actions.
27. Model-based reflex agents
⚫ Knows how the world evolves
⚪ An overtaking car gets closer from
behind
⚫ Knows how the agent's actions affect the
world
⚪ Turning the wheel clockwise takes you
right
⚫ Model-based agents update their internal
state
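A minimal sketch of this loop, where `update_state` (how the world evolves plus the effect of the agent's own actions) and `rules` (condition-action knowledge) are assumed to be supplied by the designer:

```python
class ModelBasedReflexAgent:
    """Keeps an internal state, updated by a model on each percept."""

    def __init__(self, update_state, rules, initial_state=None):
        self.state = initial_state
        self.update_state = update_state   # (state, last_action, percept) -> state
        self.rules = rules                 # state -> action
        self.last_action = None

    def step(self, percept):
        # Fold the new percept (and the last action) into the internal state,
        # then fire the matching condition-action rule.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```

With a trivial model in which the state is just the latest percept, this degenerates to a simple reflex agent; the gain comes when `update_state` tracks parts of the world the sensors cannot currently see.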
28. Goal-based agents
• Is knowing the current state of the environment enough?
– A taxi can go left, right, or straight
• Have a goal
⚪ A destination to get to
⚫Uses knowledge about the goal to guide its actions
⚪ E.g., search, planning
29. Goal-based agents
• A reflex agent brakes when it sees brake lights. A goal-based agent
reasons:
– Brake light -> the car in front is stopping -> I should stop -> I should use the brake
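The chain of reasoning above can be sketched as a toy forward chain over rules. The rule strings are illustrative, not a real planner:

```python
# Each rule maps a condition to its consequence (illustrative content).
RULES = {
    "brake light ahead": "car in front is stopping",
    "car in front is stopping": "I should stop",
    "I should stop": "use brake",
}

def infer(percept):
    """Follow the rule chain from a percept to a concrete action."""
    fact = percept
    while fact in RULES:
        fact = RULES[fact]
    return fact
```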
30. Utility-based agents
⚫Goals are not always enough
⚪ Many action sequences get the taxi to its destination
⚪ Consider other things: how fast, how safe, ...
⚫A utility function maps a state onto a real number
which describes the associated degree of
"happiness", "goodness", "success".
⚫Where does the utility measure come from?
⚪ Economics: money.
⚪ Biology: number of offspring.
⚪ Your life?
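Action selection with a utility function can be sketched as follows; `result` (the predicted next state) and `utility` are assumptions supplied by the designer:

```python
def best_action(actions, result, utility):
    """Pick the action whose predicted resulting state has the highest utility.

    result(action) -> predicted next state; utility(state) -> real number.
    """
    return max(actions, key=lambda a: utility(result(a)))
```

For the taxi, several routes all satisfy the goal of reaching the destination; the utility function is what breaks the tie between the fast route and the safe one.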
32. Learning agents
⚫ Performance element:
what was previously the
whole agent
⚪ Input: sensors
⚪ Output: actions
⚫ Learning element:
modifies the performance
element.
33. Learning agents
⚫ Critic: evaluates how the
agent is doing
⚪ Input: e.g., checkmate?
⚪ Uses a fixed performance standard
⚫ Problem generator:
tries solving the problem
differently instead of only
optimizing.
⚪ Suggests exploring new
actions -> new problems.
34. Learning agents(Taxi driver)
⚪ Performance element
⯍ How it currently drives
⚪ The taxi driver makes a quick left turn across 3 lanes
⯍ The critic observes the shocking language from passengers and other drivers
and flags the action as bad
⯍ The learning element tries to modify the performance element for the future
⯍ The problem generator suggests experiments, e.g., trying the brakes on
different road conditions
⚪ Exploration vs. exploitation
⯍ A learning experience can be costly in the short run:
⯍ shocking language from other drivers
⯍ less tip
⯍ fewer passengers
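One way to sketch how the four components fit together for a setting like the taxi example. The action names, the value-update rule, and the exploration rate are all illustrative assumptions, not the slides' method:

```python
import random

class LearningAgent:
    """The critic scores the last action, the learning element updates the
    performance element, and the problem generator suggests exploration."""

    def __init__(self, actions, explore=0.1, rng=random):
        self.actions = list(actions)
        self.value = {a: 0.0 for a in self.actions}  # learned action values
        self.explore = explore                       # exploration probability
        self.rng = rng
        self.last_action = None

    def performance_element(self):
        # Exploit: pick the currently best-valued action.
        return max(self.actions, key=self.value.get)

    def problem_generator(self):
        # Explore: try something new, even if it looks suboptimal now.
        return self.rng.choice(self.actions)

    def step(self, feedback):
        # Critic: feedback on the previous action (e.g. tip vs. shocking language).
        if self.last_action is not None:
            # Learning element: nudge the stored value toward the feedback.
            self.value[self.last_action] += 0.5 * (feedback - self.value[self.last_action])
        if self.rng.random() < self.explore:
            self.last_action = self.problem_generator()
        else:
            self.last_action = self.performance_element()
        return self.last_action
```

With exploration disabled, negative feedback on "quick left" steers the agent back to the gentler action; a nonzero exploration rate is what makes learning costly in the short run.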
35. The Big Picture: AI for Model-Based Agents
[Figure: AI for model-based agents spans Action, Learning, Knowledge, Logic,
Probability, Heuristics, Inference, Planning, Decision Theory, Game Theory,
Reinforcement Learning, Machine Learning, and Statistics]
36. The Picture for Reflex-Based Agents
[Figure: reflex-based agents connect Action and Learning via Reinforcement
Learning]
• Studied in AI, Cybernetics, Control Theory, Biology,
Psychology.
37. Discussion Question
⚫Model-based behaviour has a large overhead.
⚫ Our large brains are very expensive from an
evolutionary point of view.
⚫Why would it be worthwhile to base behaviour on a
model rather than “hard-code” it?
⚫For what types of organisms in what type of
environments?
38. Summary
⚫ Agents can be described by their PEAS.
⚫ Environments can be described by several key properties: the six
binary dimensions give 64 environment types.
⚫ A rational agent maximizes the performance measure for
its PEAS.
⚫ The performance measure depends on the agent
function.
⚫ The agent program implements the agent function.
⚫ 3 main architectures for agent programs.
⚫ In this course we will look at some of the common and
useful combinations of environment/agent architecture.