Types of Intelligent Agents in Artificial Intelligence
Simple Reflex, Model-Based, Goal-Based, Utility-Based, and Learning Agents
Simple Reflex Agents
• Ignore the percept history and act only on the basis of
the current percept.
• Percept history: The record of everything an agent has
perceived so far.
• Based on condition-action rule: If condition is true,
action is taken.
• Works well only in fully observable environments.
• In partially observable environments, infinite loops may
occur.
• Can escape loops by randomizing actions.
Problems:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the
state.
• The condition-action rule table is usually too
large to generate and store.
• Rules must be updated if the environment
changes.
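The condition-action idea can be sketched with the classic two-square vacuum world (the locations, percept format, and rules below are illustrative assumptions, not a prescribed implementation):

```python
# Minimal simple reflex agent sketch: the two-square vacuum world.
# The agent ignores percept history and acts on the current percept only.

def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair; return an action string."""
    location, status = percept
    # Condition-action rules: if the condition is true, the action is taken.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note the limitation the bullets describe: with no memory, two clean squares make the agent shuttle Right/Left forever, which is the infinite-loop problem in partially observable settings.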
Model-Based Reflex Agents
• Handle partially observable environments using a
model of the world.
• Find rules whose conditions match the current
situation.
• Maintain an internal state updated by percepts.
• Store the current state describing unseen parts of the
world.
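A hedged sketch of the model-based idea: the agent folds each percept into a stored internal state, so a rule can fire even when its condition refers to something not in the current percept (class and rule names are illustrative):

```python
# Model-based reflex agent sketch: maintains an internal state updated
# by percepts, so it can act in a partially observable environment.

class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}      # internal model of the world
        self.rules = rules   # list of (condition, action) pairs

    def act(self, percept):
        # World-model step: merge the new percept into the stored state,
        # which also describes parts of the world not currently seen.
        self.state.update(percept)
        # Find a rule whose condition matches the current situation.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"

rules = [
    (lambda s: s.get("status") == "Dirty", "Suck"),
    (lambda s: s.get("location") == "A", "Right"),
    (lambda s: s.get("location") == "B", "Left"),
]
agent = ModelBasedReflexAgent(rules)
print(agent.act({"location": "A", "status": "Dirty"}))  # Suck
print(agent.act({"status": "Clean"}))  # Right: location "A" is remembered
```

The second call shows the difference from a simple reflex agent: the percept omits the location, but the internal state still holds it.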
Goal-Based Agents
• Make decisions based on how far they are from their goals.
• Each action aims to reduce the distance to the goal.
• Can choose between multiple possibilities to reach the goal.
• Knowledge is explicit and modifiable, providing flexibility.
• Require search and planning.
• Behavior can easily be changed.
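The search-and-planning requirement can be illustrated with a tiny breadth-first search: the agent considers multiple action sequences and keeps the one that reaches the goal (the 3x3 grid and move set are assumptions made for the example):

```python
# Goal-based agent sketch: plan a path to an explicit goal by searching.

from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search; return the shortest state path, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def neighbors(pos):
    """4-connected moves on a 3x3 grid."""
    x, y = pos
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

print(plan_to_goal((0, 0), (2, 2), neighbors))
```

Because the goal is explicit rather than baked into rules, changing the behavior is as easy as passing a different `goal`, which is the flexibility the bullets describe.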
Utility-Based Agents
• Used when multiple alternatives exist and the agent must choose the best one.
• Choose actions based on a preference (utility) for each state.
• Focus not only on achieving goals but also on efficiency, safety,
or cost.
• Utility represents the agent's 'happiness'.
• Select the action that maximizes expected utility.
• A utility function maps each state to a real number representing
happiness.
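Expected-utility maximization can be sketched in a few lines; the actions, outcome probabilities, and utility numbers below are made up for illustration:

```python
# Utility-based agent sketch: pick the action with the highest
# expected utility over its possible outcomes.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (prob, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "fast_route": [(0.7, 10), (0.3, -20)],  # quick but risky
    "safe_route": [(1.0, 4)],               # slower but certain
}
print(choose_action(actions))  # safe_route
```

Both routes reach the goal, but the agent prefers the safe one (expected utility 4 vs. 1), showing how utility captures efficiency, safety, or cost beyond mere goal achievement.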
Learning Agents
• Learn from past experiences and adapt automatically.
• Start with basic knowledge and improve through learning.
Main components:
1. Learning Element – Improves by learning from the environment.
2. Critic – Provides feedback about performance quality.
3. Performance Element – Chooses external actions.
4. Problem Generator – Suggests new and informative actions.
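The four components can be wired together in a toy sketch; the two-action bandit environment and all names are illustrative assumptions, chosen only so the learning element has something to improve:

```python
# Learning agent sketch: learning element, critic, performance element,
# and problem generator, trained on a toy two-action reward environment.

import random

class LearningAgent:
    def __init__(self, n_actions):
        self.values = [0.0] * n_actions  # learned action-value estimates
        self.counts = [0] * n_actions

    def performance_element(self):
        """Choose an external action (greedy on current knowledge)."""
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def problem_generator(self, explore=0.2):
        """Occasionally suggest a new, informative (random) action."""
        if random.random() < explore:
            return random.randrange(len(self.values))
        return self.performance_element()

    def critic(self, reward):
        """Feedback on performance quality: here, just the raw reward."""
        return reward

    def learning_element(self, action, feedback):
        """Improve the value estimates using the critic's feedback."""
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (feedback - self.values[action]) / n

random.seed(0)
agent = LearningAgent(n_actions=2)
true_rewards = [0.2, 0.8]  # action 1 is actually better
for _ in range(500):
    a = agent.problem_generator()
    r = true_rewards[a] + random.gauss(0, 0.1)
    agent.learning_element(a, agent.critic(r))
print(agent.performance_element())  # greedy choice after training
```

The agent starts with no knowledge (all values zero) and, through the problem generator's exploration and the critic's feedback, learns to prefer the better action, which is the adaptation the bullets describe.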
