The structure of agents
Presentation Transcript

  • 1. THE STRUCTURE OF AGENTS
    K.MAHALAKSHMI.,AP/CSE
    APEC
  • 2. THE STRUCTURE OF AGENTS
    Agent program:
    It implements the agent function, mapping percepts to actions.
    • Architecture:
    The agent program runs on some sort of computing device with physical sensors and actuators, called the architecture.
    Agent = architecture + program
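
A minimal sketch of this separation in Python, assuming a hypothetical Architecture class that owns the sensors and actuators (none of these names come from the slides):

    # Sketch only: the agent program is a plain function from the
    # current percept to an action; the architecture runs it.
    def agent_program(percept):
        return "NoOp"  # placeholder policy

    class Architecture:
        def sense(self):
            return None        # read the physical sensors
        def act(self, action):
            pass               # drive the actuators

    def run(architecture, program, steps=10):
        # Agent = architecture + program: the architecture supplies
        # percepts and executes the actions the program chooses.
        for _ in range(steps):
            percept = architecture.sense()
            action = program(percept)
            architecture.act(action)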
  • 3. THE STRUCTURE OF AGENTS
    Agent programs:
    • The agent program takes the current percept as input from the sensors and returns an action to the actuators.
    • 4. The agent program takes just the current percept as input because nothing more is available from the environment.
    • 5. If the agent’s actions depend on the entire percept sequence, the agent will have to remember the percepts.
  • 6. Continue…
    Four kinds of agent programs:
    • Simple reflex agents
    • 7. Model-based reflex agents
    • 8. Goal-based agents
    • 9. Utility-based agents
  • 10. Simple reflex agents
    • These are the simplest kind of agents; they select actions on the basis of the current percept, ignoring the rest of the percept history.
    • 11. The agent program for a simple reflex agent in the two-state vacuum environment is as follows:
    function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
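
The same agent program as a runnable Python sketch (the usage lines at the end are illustrative):

    def reflex_vacuum_agent(percept):
        # percept is a (location, status) pair, as in the pseudocode
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    # The agent reacts to the current percept only:
    print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
    print(reflex_vacuum_agent(("A", "Clean")))   # Right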
  • 12. Simple reflex agents
    [Fig: schematic diagram of a simple reflex agent]
  • 13.
    • Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.
    • 14. Escape from infinite loops is possible if the agent can randomize its actions.
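
A hedged sketch of that idea, continuing the vacuum example under the assumption that the agent cannot sense its location (the coin-flip policy is illustrative, not from the slides):

    import random

    def randomized_reflex_vacuum_agent(percept):
        # With no location sensor, a deterministic rule can loop
        # forever; choosing Left or Right at random lets the agent
        # eventually escape.
        status = percept
        if status == "Dirty":
            return "Suck"
        return random.choice(["Left", "Right"])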
  • 15. Model-based reflex agents
  • 16. Model-based reflex agents
    The most effective way to handle partial observability is for the agent to keep track of the part of the world it can’t see now.
    The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
    Model-based agent
    The knowledge about “how the world works”, whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world.
    An agent that uses such a model is called a model-based agent.
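
A minimal sketch of the idea in Python (update_state, rules, and the memory layout are assumptions made for illustration):

    def make_model_based_reflex_agent(update_state, rules, initial_state):
        # The agent closes over its internal state; update_state folds
        # the latest percept and the last action into that state using
        # the model of "how the world works".
        memory = {"state": initial_state, "last_action": None}

        def agent(percept):
            memory["state"] = update_state(memory["state"],
                                           memory["last_action"], percept)
            action = rules(memory["state"])   # condition-action rule match
            memory["last_action"] = action
            return action

        return agent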
  • 17. Goal-based agents
  • 18. Goal:
    The agent needs some sort of goal information that describes situations that are desirable.
    Example: being at the passenger’s destination.
    The agent program can combine this goal information with information about the results of possible actions in order to choose actions that achieve the goals.
    Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent’s goals.
    Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.
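
A hedged sketch of one-step goal-based action selection (results, goal_test, and actions are illustrative names; real goal-based agents typically search over whole action sequences):

    def make_goal_based_agent(results, goal_test, actions):
        # results(state, action) predicts the successor state;
        # goal_test(state) says whether a state is desirable.
        def agent(state):
            for action in actions:
                if goal_test(results(state, action)):
                    return action
            return None   # no single action suffices; search/planning
                          # would look for a longer sequence instead
        return agent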
  • 19. Utility-based agents
  • 20. Utility-based agents
    Goals alone are not really enough to generate high-quality behavior in most environments.
    Utility: if one world state is preferred to another, then it has higher utility for the agent.
    Utility function
    It maps a state (or sequence of states) onto a real number, which describes the associated degree of happiness.
    A complete specification of the utility function allows rational decisions in two kinds of cases where goals are inadequate.
    When there are conflicting goals, only some of which can be achieved, the utility function specifies the appropriate tradeoff.
    When there are several goals that the agent can aim for, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
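
A minimal sketch: the agent scores each action's predicted outcome with a utility function and picks the best (utility, results, and actions are illustrative names):

    def make_utility_based_agent(results, utility, actions):
        # utility(state) maps a state onto a real number; choosing the
        # action with the highest-utility successor trades off among
        # goals instead of testing a single goal predicate.
        def agent(state):
            return max(actions, key=lambda a: utility(results(state, a)))
        return agent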
  • 21. Learning agents
  • 22. Learning agents
    A method to build learning machines and then to teach them, used in many areas of AI for creating state-of-the-art systems.
    Advantage: it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow.
    Conceptual components of a learning agent:
    Learning element: responsible for making improvements.
    Performance element: responsible for selecting external actions.
    Critic: the learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
  • 23. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
    The performance standard distinguishes part of the incoming percept as a reward (or penalty) that provides direct feedback on the quality of the agent’s behavior.
    Example: hard-wired performance standards such as pain and hunger in animals can be understood in this way.
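
A structural sketch of how the four components might be wired together (every class and method name here is illustrative, not from the slides):

    import random

    class LearningAgent:
        def __init__(self, performance_element, learning_element,
                     critic, problem_generator):
            self.performance_element = performance_element  # selects external actions
            self.learning_element = learning_element        # improves the performance element
            self.critic = critic                            # scores behavior against the standard
            self.problem_generator = problem_generator      # suggests exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)     # reward or penalty signal
            self.learning_element(self.performance_element, feedback)
            if random.random() < 0.1:           # exploration rate is an assumption
                return self.problem_generator()
            return self.performance_element(percept)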