1. Intelligent Agents (Introduction)
• An agent perceives its environment, makes decisions, and performs actions
– Goal: act intelligently to achieve objectives
– Used in: self-driving cars, chatbots, robots, smart assistants
2. Why Agents Matter
• Agents form the foundation of AI systems
– They automate tasks and solve complex problems
– Used in fields like healthcare, transport, gaming, finance, robotics
3. Characteristics of Intelligent Agents
• Autonomy: Operates without direct human control
– Reactivity: Responds quickly to environmental changes
– Proactivity: Takes goal-oriented actions
– Social Ability: Communicates with humans and other agents
4. Agent Interaction Cycle
• Perception: Collect data using sensors
– Reasoning: Process the information and decide on an action
– Action: Execute the decision using actuators
– The cycle repeats continuously, producing intelligent behavior
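The perceive–reason–act cycle can be sketched in a few lines of Python. The thermostat agent, room environment, and their method names below are toy assumptions for illustration, not part of the slides:

```python
class Thermostat:
    """Agent: decides to heat whenever the room is below a setpoint."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def decide(self, temperature):
        # Reasoning step: map the current percept to an action
        return "heat" if temperature < self.setpoint else "off"


class Room:
    """Toy environment: heating warms the room, otherwise it cools."""
    def __init__(self, temperature):
        self.temperature = temperature

    def perceive(self):
        return self.temperature          # what the sensors report

    def execute(self, action):
        self.temperature += 1.0 if action == "heat" else -0.5


def run_agent(agent, env, steps):
    """The interaction cycle: perceive -> reason -> act, repeated."""
    for _ in range(steps):
        percept = env.perceive()         # Perception: read sensors
        action = agent.decide(percept)   # Reasoning: choose an action
        env.execute(action)              # Action: drive actuators
    return env.temperature


print(run_agent(Thermostat(20.0), Room(15.0), 10))
```

The loop itself is the important part: the agent never touches the environment's state directly, only through percepts in and actions out.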
5. Understanding Rationality
• Rationality = choosing the best decision based on available information
– Considers the performance measure, knowledge, percept history, and possible actions
– A rational agent aims to maximize expected success
6. Factors Influencing Rationality
• Performance Measure: Defines the success criteria
– Percept Sequence: Everything the agent has sensed so far
– Knowledge: What the agent knows about its environment
– Available Actions: The options the agent can take
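"Maximum expected success" can be made concrete with a small sketch of expected-value maximization. The two routes and their probabilities are made-up numbers for illustration:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, performance_score) pairs."""
    return sum(p * score for p, score in outcomes)


def rational_choice(actions):
    """Pick the action whose outcome distribution has the highest
    expected performance, given the agent's knowledge."""
    return max(actions, key=lambda a: expected_value(actions[a]))


# Assumed toy numbers: two routes with uncertain travel quality.
actions = {
    "highway":  [(0.7, 10), (0.3, 2)],   # fast unless congested: E = 7.6
    "backroad": [(1.0, 6)],              # reliably mediocre:      E = 6.0
}
print(rational_choice(actions))
```

Note that the rational choice depends entirely on the four factors listed above: the scores encode the performance measure, the probabilities encode the agent's knowledge, and the dictionary keys are the available actions.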
7. Types of Rationality
• Perfect Rationality: Always chooses the optimal action
– Bounded Rationality: Makes good-enough decisions under computational and informational limits
– Instrumental Rationality: Selects the actions that best achieve goals
– Epistemic Rationality: Builds accurate beliefs from evidence
– Practical Rationality: Balances goals, risks, and resources
8. Task Environment Properties (1)
• Fully vs Partially Observable: Complete vs limited information about the state
– Deterministic vs Stochastic: Predictable vs probabilistic outcomes
– Episodic vs Sequential: Independent episodes vs actions that affect later decisions
9. Task Environment Properties (2)
• Static vs Dynamic: Environment stays fixed or changes while the agent deliberates
– Discrete vs Continuous: Finite vs infinite states and actions
– Single vs Multi-Agent: One agent vs multiple interacting agents
10. PEAS Framework
• Performance Measure: Defines success for the agent
– Environment: Where the agent operates
– Actuators: How the agent performs actions
– Sensors: How the agent gathers data
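A PEAS description is just structured data, so it can be written down directly. The vacuum-robot entries below are an assumed illustration, not taken from the slides:

```python
from dataclasses import dataclass


@dataclass
class PEAS:
    """The four components of a PEAS task-environment description."""
    performance: list   # success criteria
    environment: list   # where the agent operates
    actuators: list     # how the agent acts
    sensors: list       # how the agent perceives


# Hypothetical PEAS description for a vacuum-cleaning robot.
vacuum = PEAS(
    performance=["cleanliness", "battery life"],
    environment=["rooms", "carpets", "obstacles"],
    actuators=["wheels", "suction motor", "brushes"],
    sensors=["bump sensor", "dirt sensor", "cliff sensor"],
)

print(vacuum.sensors)
```

Writing the PEAS description first forces every later design decision (agent type, internal state, learning) to be justified against an explicit success criterion.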
11. Types of Agents
• Simple Reflex: Acts only on the current percept
– Model-Based: Uses memory and an internal world model
– Goal-Based: Chooses actions that achieve explicit goals
– Utility-Based: Selects the most beneficial outcome
– Learning Agent: Improves its performance over time
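The simplest of these, the simple reflex agent, is just a lookup from the current percept to an action. The condition–action rules below are a toy assumption:

```python
# Condition-action rules: percept -> action, with no memory or model.
RULES = {
    "dirty": "suck",
    "clean": "move",
}


def simple_reflex_agent(percept):
    """A simple reflex agent: acts on the current percept alone.

    It cannot remember past percepts or predict future states,
    which is exactly what model-based and goal-based agents add.
    """
    return RULES.get(percept, "noop")


print(simple_reflex_agent("dirty"))
```

The other agent types in the list replace this fixed table with progressively richer machinery: internal state, explicit goals, utility functions, and finally components that learn the rules themselves.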
12. Example: Taxi Driver Agent
• Partially observable: The driver can't see everything at once
– Dynamic: Traffic keeps changing
– Multi-agent: Other cars and pedestrians
– Continuous: Speed, road conditions, and distances
– Sequential: Each decision affects future states
13. Example: Chess with Clock
• Fully observable: The board is fully visible
– Multi-agent: Two players compete
– Deterministic: No randomness; moves have predictable effects
– Discrete: A finite set of possible moves
– Sequential: Each move affects the future game state
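The two worked examples can be encoded as simple property records and compared programmatically; the values come directly from the two slides above (properties a slide does not mention are left out):

```python
# Task-environment properties of the two examples, per the slides.
taxi = {
    "observability": "partial",
    "dynamics": "dynamic",
    "agents": "multi",
    "state space": "continuous",
    "episodes": "sequential",
}

chess = {
    "observability": "full",
    "agents": "multi",
    "determinism": "deterministic",
    "state space": "discrete",
    "episodes": "sequential",
}

# Properties listed for both environments where the two differ.
shared = set(taxi) & set(chess)
print(sorted(k for k in shared if taxi[k] != chess[k]))
```

This kind of side-by-side classification is exactly how the environment properties from slides 8 and 9 are used in practice: to decide which agent design is feasible for a given task.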
14. Summary
• Agents sense, reason, and act to achieve goals
– Rationality = the best action given available information and limits
– Task environments vary in observability, dynamics, and structure
– Agent types range from simple reflex to advanced learning agents