• Simple reflex agents act based solely on current perceptions
using condition-action rules.
• These agents respond directly to stimuli without considering
past experiences or potential future states.
• They operate on basic "if-then" logic: if a specific condition is
detected, execute a corresponding action.
• Key Features:
• No memory of past states
• No model of how the world works
• Purely reactive behavior
• Function best in fully observable environments
For example: Traffic light control systems that change signals
based on fixed timing.
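The "if-then" logic above can be sketched as a table of condition-action rules that maps the current percept straight to an action. This is a minimal illustration, not a standard API; the traffic-timer rules below are hypothetical.

```python
# A simple reflex agent: no memory, no world model -- the current
# percept alone selects the action via condition-action rules.

def simple_reflex_agent(percept, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(percept):
            return action
    return None  # no rule fired

# Hypothetical rules for a fixed-timing traffic signal.
rules = [
    (lambda p: p["timer"] < 30, "GREEN"),
    (lambda p: p["timer"] < 35, "YELLOW"),
    (lambda p: True, "RED"),
]

print(simple_reflex_agent({"timer": 10}, rules))  # GREEN
```

Because the agent never consults past percepts, it works only when the current percept fully determines the right action (a fully observable environment).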
• Model-based reflex agents maintain an internal
representation of the world, allowing them to track aspects
of the environment they cannot directly observe.
• This internal model helps them make more informed
decisions by considering how the world evolves and how
their actions affect it.
• They use a model of the world to choose their actions, and they
maintain an internal state.
• Model: knowledge about "how things happen in the world".
• Internal state: a representation of unobserved aspects of the
current state, based on the percept history.
• Updating the state requires information about how the world
evolves and how the agent's actions affect the world.
• Key Features:
• Track the world's state over time
• Infer unobserved aspects of current states
• Function effectively in partially observable environments
• Still primarily reactive, but with contextual awareness
For example: Robot vacuum cleaners that map rooms and
track cleaned areas.
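The internal-state idea can be sketched as follows: the agent updates a memory of which squares it believes are clean, then uses that memory to avoid revisiting them. The two-square world and the action names are illustrative, not a standard formulation.

```python
# A model-based reflex agent for a two-square vacuum world.
# Internal state: the set of squares believed clean, inferred
# from the percept history.

class ModelBasedAgent:
    def __init__(self):
        self.cleaned = set()  # internal state: squares known clean

    def update_state(self, percept):
        # "How the world evolves": a square observed Clean stays clean
        # until observed Dirty again.
        location, status = percept
        if status == "Clean":
            self.cleaned.add(location)

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            self.cleaned.discard(location)
            return "Suck"
        # Use the internal state: skip squares already believed clean.
        for target in ("A", "B"):
            if target != location and target not in self.cleaned:
                return "MoveTo:" + target
        return "NoOp"
```

Unlike the simple reflex agent, this one can stop moving once its model says every square is clean, even though it can only observe one square at a time.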
• Goal-based agents plan their actions with a specific
objective in mind.
• Unlike reflex agents that respond to immediate stimuli,
goal-based agents evaluate how different action sequences
might lead toward their defined goal, selecting the path that
appears most promising.
• Key Features:
• Employ search and planning mechanisms
• Evaluate actions based on their contribution toward goal
achievement
• Consider future states and outcomes
• May explore multiple possible routes to a goal
For example: Logistics routing agents that find optimal
delivery routes based on factors like distance and time,
continually adjusting to reach the most efficient route.
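The "search and planning" step above can be sketched with breadth-first search: the agent explores action sequences and returns the first one that reaches the goal. The road map is hypothetical.

```python
# A goal-based agent: search over action sequences for one
# that reaches the goal state.
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search; returns a shortest path of states to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Hypothetical delivery map: depot to customer via hubs.
roads = {"Depot": ["Hub1", "Hub2"], "Hub1": ["Customer"], "Hub2": ["Hub1"]}
print(plan_to_goal("Depot", "Customer", roads))  # ['Depot', 'Hub1', 'Customer']
```

The key difference from the reflex agents is that the action chosen now depends on a lookahead over future states, not just the current percept.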
• Utility-based agents extend goal-based thinking by
evaluating actions based on how well they maximize a utility
function—essentially a measure of "happiness" or
"satisfaction."
• Choose actions based on a preference (utility) for each state
• Key Features:
• Balance multiple, sometimes conflicting objectives
• Handle probabilistic and uncertain environments
• Evaluate actions based on expected utility
• Make rational decisions under constraints
• For example: Financial portfolio management agents that
evaluate investments based on factors like risk, return, and
diversification, choosing the options that provide the most
value.
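Decision-making by expected utility can be sketched in a few lines: each action leads to outcomes with probabilities, and the agent picks the action maximizing the probability-weighted utility. The portfolio numbers are invented for illustration.

```python
# A utility-based agent: choose the action with the highest
# expected utility over uncertain outcomes.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical portfolio choices: (probability, payoff) per scenario.
portfolios = {
    "bonds":  [(1.0, 3.0)],                # safe, low return: EU = 3.0
    "stocks": [(0.6, 10.0), (0.4, -5.0)],  # risky: EU = 4.0
}
print(choose(portfolios))  # stocks
```

Note how this handles the uncertain, conflicting-objective case a plain goal-based agent cannot: both portfolios "reach the goal" of investing, but only the utility function ranks them.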
• A learning agent in AI is an agent that can learn from its past
experiences; it has learning capabilities.
• It starts with basic knowledge and is then able to act and adapt
automatically through learning.
• A learning agent has mainly four conceptual components, which
are:
1. Learning element: responsible for making improvements by
learning from the environment.
2. Critic: gives the learning element feedback describing how well
the agent is doing with respect to a fixed performance
standard.
3. Performance element: responsible for selecting external
actions.
4. Problem generator: responsible for suggesting actions that
will lead to new and informative experiences.
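The four components above can be wired together in a toy sketch. The "learning" here is just a rule-table update, and all names are illustrative, not a standard API.

```python
# A minimal learning-agent skeleton showing how the four
# components interact.

class LearningAgent:
    def __init__(self):
        self.rules = {}  # knowledge used by the performance element

    def performance_element(self, percept):
        """Selects the external action from current knowledge."""
        return self.rules.get(percept, "explore")

    def critic(self, percept, action, reward):
        """Judges the action against a fixed performance standard."""
        return reward > 0

    def learning_element(self, percept, action, good):
        """Improves future behavior using the critic's feedback."""
        if good:
            self.rules[percept] = action

    def problem_generator(self):
        """Suggests actions that yield new, informative experiences."""
        return "try-new-action"

agent = LearningAgent()
good = agent.critic("dirty", "Suck", reward=1)
agent.learning_element("dirty", "Suck", good)
print(agent.performance_element("dirty"))  # Suck
```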
Function: REFLEX-VACUUM-AGENT([location, status])
• Inputs:
• location: Current location of the vacuum agent (either A or B).
• status: Whether the current square is Dirty or Clean.
• Outputs:
• An action (Suck, Left, or Right).
Algorithm (step by step)
• If status = Dirty then return Suck
• If the agent finds itself on a dirty square, it cleans it by performing the
Suck action.
• Else if location = A then return Right
• If the square is clean and the agent is in location A, it moves Right to
location B.
• Else if location = B then return Left
• If the square is clean and the agent is in location B, it moves Left to
location A.
Behavior Example
• If the agent is at [A, Dirty]
Action:
→ Suck
• If the agent is at [A, Clean]
Action:
→ Right
• If the agent is at [B, Dirty]
Action:
→ Suck
• If the agent is at [B, Clean]
Action:
→ Left
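The algorithm above transcribes directly into Python; this sketch reproduces the three rules and the four behaviors listed.

```python
# REFLEX-VACUUM-AGENT: condition-action rules for the
# two-square vacuum world.

def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"    # clean the current square
    elif location == "A":
        return "Right"   # A is clean: move to B
    elif location == "B":
        return "Left"    # B is clean: move to A

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("B", "Clean"))  # Left
```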