Agent properties



MAS course at URV, lecture 3. Based on Rosenschein's slides for Wooldridge's book.


  1. LECTURE 3: AGENT PROPERTIES. Artificial Intelligence II – Multi-Agent Systems. Introduction to Multi-Agent Systems. URV, Winter-Spring 2010.
  2. Overview. What properties should an agent have? What kinds of things are not agents? Objects; expert systems. Common problems in agent development.
  3. Defining agents. There are many possible definitions of "agent". Each author provides a set of characteristics/properties that are considered important to the notion of agenthood. They can be divided into: internal, which determine the actions within an agent; and external, which affect the interaction of the agent with other (computational/human) agents.
  4. 1-Flexibility. An intelligent agent is a computer system capable of flexible action in some dynamic environment. By flexible, we mean: reactive, proactive, social.
  5. 2-Reactivity. If a program's environment is guaranteed to be fixed, the program does not need to worry about its own success or failure: it just executes blindly. An example of a fixed environment is a compiler. The real world is not like that: things change, information is incomplete, and many (most?) interesting environments are dynamic, e.g. a multi-agent world. Software is hard to build for dynamic domains: programs must take into account the possibility of failure. A reactive system is one that maintains an ongoing interaction with its environment and responds to changes that occur in it (in time for the response to be useful).
  6. Ways to achieve reactivity [recall the lecture from last week]: reactive architectures ([situation – action] rules; layered, behaviour-based architectures); deliberative architectures (symbolic world model, long-term goals; reasoning, planning); hybrid architectures (reactive layer + deliberative layer).
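To make the [situation – action] idea concrete, here is a minimal sketch of a purely reactive agent in Java. It is an invented illustration (the Rule record, the percept strings and the actions are assumptions, not from the slides): the agent keeps no world model and simply fires the first rule whose situation matches the current percept.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

/** Minimal situation-action reactive agent: no world model, no planning. */
public class ReactiveAgent {

    /** A rule fires an action when its situation predicate matches the percept. */
    record Rule(Predicate<String> situation, Runnable action) {}

    private final List<Rule> rules;

    ReactiveAgent(List<Rule> rules) { this.rules = rules; }

    /** React to a single percept: the first matching rule wins. */
    void perceive(String percept) {
        Optional<Rule> match = rules.stream()
                .filter(r -> r.situation().test(percept))
                .findFirst();
        match.ifPresent(r -> r.action().run());
    }

    public static void main(String[] args) {
        ReactiveAgent agent = new ReactiveAgent(List.of(
                new Rule(p -> p.contains("obstacle"), () -> System.out.println("turn left")),
                new Rule(p -> p.contains("clear"),    () -> System.out.println("move forward"))
        ));
        agent.perceive("obstacle ahead");  // prints "turn left"
        agent.perceive("path clear");      // prints "move forward"
    }
}
```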
  7. 3-Proactiveness. Reacting to an environment is relatively easy (e.g., stimulus → response rules), but we generally want agents to do things for us, to act on our behalf. Hence they must exhibit goal-directed behaviour: agents should be proactive.
  8. Aspects of proactiveness: generating and attempting to achieve goals, so that behaviour is not driven solely by events; taking the initiative when appropriate (executing actions, giving advice, making recommendations or suggestions without an explicit user request); recognizing opportunities on the fly (available resources, chances of cooperation).
  9. Example of proactiveness (I). A Personal Assistant Agent running continuously on our mobile phone: it tracks our location (e.g. via GPS) and knows our preferences (cultural activities, food). It can proactively warn us when we are close to an interesting cultural activity, or when it is lunch time and we are close to a restaurant that offers our favourite food. [Turist@]
  10. [Figure-only slide; no text content.]
  11. Example of proactiveness (II). A set of agents embedded in the home of an elderly or disabled person detects the movement of the person around the house and the actions he/she performs, and learns the usual daily patterns of behaviour. It can then detect abnormal situations and proactively send warnings/alarms to the family or to health services: e.g. too much time in the same position, a long time in the bathroom, a whole day without going into the kitchen, ...
  12. [Figure-only slide; no text content.]
  13. Balancing Reactive and Goal-Oriented Behaviour. We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion, and we also want them to work systematically towards long-term goals. These two considerations can be at odds with one another. Designing an agent that balances the two remains an open research problem. [Recall hybrid architectures from last week.]
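One common way to strike this balance, sketched below under invented names (this is not a specific architecture from the course), is a deliberation loop that handles pending percepts first and only advances the long-term plan when nothing urgent is waiting:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Invented sketch: interleaving reactive responses with progress on a long-term goal. */
public class BalancedAgent {

    private final Deque<String> percepts = new ArrayDeque<>();  // incoming events
    private final Deque<String> plan = new ArrayDeque<>();      // steps towards the goal

    BalancedAgent() {
        plan.add("book-flight");
        plan.add("reserve-hotel");
        plan.add("buy-museum-tickets");
    }

    void sense(String percept) { percepts.add(percept); }

    /** One deliberation cycle: react first, otherwise advance the plan. */
    void step() {
        String percept = percepts.poll();
        if (percept != null) {
            System.out.println("React to: " + percept);        // timely response wins
        } else if (!plan.isEmpty()) {
            System.out.println("Plan step: " + plan.poll());   // goal-directed progress
        }
    }

    public static void main(String[] args) {
        BalancedAgent agent = new BalancedAgent();
        agent.sense("price-alert");
        for (int i = 0; i < 5; i++) agent.step();
        // Output: reacts to price-alert first, then executes the three plan steps
    }
}
```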
  14. 4-Social Ability. The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account. Some goals can only be achieved with the cooperation of others. The same is true of many computer environments: witness the Internet. Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps to cooperate with others.
  15. Requirements for communication: an agent communication language (FIPA-ACL: message types, message attributes; agent communication protocols); languages to represent the content of the messages between agents; shared ontologies; world-wide standards.
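To make "message types and attributes" concrete, here is a minimal sketch of an ACL-style message as a Java value type. The field names mirror standard FIPA-ACL parameters (performative, sender, receiver, content, language, ontology), but the class itself is illustrative and is not the API of any real FIPA library:

```java
/** Illustrative FIPA-ACL-style message; fields mirror standard ACL parameters. */
public record AclMessage(
        Performative performative,  // the message type, e.g. REQUEST or INFORM
        String sender,              // name of the sending agent
        String receiver,            // name of the receiving agent
        String content,             // the actual content, in the agreed content language
        String language,            // content language, e.g. "fipa-sl"
        String ontology             // shared ontology the content terms come from
) {
    /** A few of the standard FIPA-ACL performatives. */
    public enum Performative { INFORM, REQUEST, AGREE, REFUSE, QUERY_REF, PROPOSE }

    public static void main(String[] args) {
        AclMessage msg = new AclMessage(
                Performative.REQUEST, "buyer-agent", "seller-agent",
                "(price book-123)", "fipa-sl", "e-commerce-ontology");
        System.out.println(msg);
    }
}
```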
  16. High-level activities. Communication is a first step towards more sophisticated activities: coordination (how to divide a task between a group of agents; distributed planning); cooperation (sharing intermediate results, sharing resources, distributed problem solving); negotiation [e-commerce] (the conditions of an economic transaction; finding the agent that can provide a service with the best conditions). [Second part of the course]
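As a tiny illustration of the negotiation idea (finding the agent that can provide a service with the best conditions), here is a hedged one-round sketch in which a buyer collects price proposals and accepts the cheapest. The names and protocol shape are invented; a real system would typically use something like FIPA's Contract Net protocol:

```java
import java.util.Comparator;
import java.util.List;

/** Toy one-round negotiation: call for proposals, then accept the cheapest offer. */
public class BestOfferNegotiation {

    record Proposal(String seller, double price) {}

    public static void main(String[] args) {
        // Proposals that would normally arrive as PROPOSE messages from seller agents
        List<Proposal> proposals = List.of(
                new Proposal("seller-a", 12.50),
                new Proposal("seller-b", 9.95),
                new Proposal("seller-c", 11.00));

        // Accept the proposal with the best conditions (here: the lowest price)
        Proposal best = proposals.stream()
                .min(Comparator.comparingDouble(Proposal::price))
                .orElseThrow();

        System.out.println("ACCEPT-PROPOSAL -> " + best.seller() + " at " + best.price());
    }
}
```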
  17. Other aspects related to communication: security issues (authentication, encryption); finding other agents that provide services, i.e. matchmaking, which is quite difficult in open systems; trust (to what extent can we trust the other agents in the system? reputation models).
  18. 5-Rationality. An agent will act in order to achieve its goals, and it will not act in such a way as to prevent its goals from being achieved, at least insofar as its beliefs permit. For instance, it will not apply deductive procedures without a purpose, CLIPS-style.
  19. 6-Reasoning capabilities. An essential aspect of intelligent/rational behaviour: a knowledge base with beliefs about the world; the ability to infer and extrapolate based on current knowledge and experiences; the capacity to make plans. This is the characteristic that distinguishes an intelligent agent from a more "robotic", reactive-like agent.
  20. Kinds of reasoning in AI (I). Knowledge-based systems / expert systems: reasoning techniques specialised in the system's domain; forward-chaining, backward-chaining, hybrid. Rule-based systems: knowledge is represented as a set of rules; detect – select – apply execution cycle; CLIPS.
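A toy sketch of the detect – select – apply cycle in Java may help here; this is not CLIPS, and the fact and rule representations are invented for illustration:

```java
import java.util.*;
import java.util.function.Predicate;

/** Toy forward-chaining rule engine illustrating the detect-select-apply cycle. */
public class RuleEngine {

    record Rule(String name, Predicate<Set<String>> condition, String conclusion) {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(Set.of("has-fever", "has-cough"));
        List<Rule> rules = List.of(
                new Rule("r1", f -> f.contains("has-fever") && f.contains("has-cough"),
                         "suspect-flu"),
                new Rule("r2", f -> f.contains("suspect-flu"), "recommend-rest"));

        boolean changed = true;
        while (changed) {                       // repeat until quiescence
            changed = false;
            // DETECT: find all rules whose conditions match the current facts
            List<Rule> applicable = rules.stream()
                    .filter(r -> r.condition().test(facts) && !facts.contains(r.conclusion()))
                    .toList();
            if (!applicable.isEmpty()) {
                // SELECT: trivial conflict resolution, take the first applicable rule
                Rule chosen = applicable.get(0);
                // APPLY: fire the rule, adding its conclusion to working memory
                facts.add(chosen.conclusion());
                System.out.println(chosen.name() + " fired -> " + chosen.conclusion());
                changed = true;
            }
        }
        System.out.println("Final facts: " + facts);
    }
}
```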
  21. Kinds of reasoning in AI (II). Case-based reasoning: using similarity to previously solved problems. Approximate reasoning: fuzzy logic, Bayesian networks, probabilities, etc.
  22. 7-Learning. A basic component of intelligent behaviour. It can be seen as the [automatic] improvement of the agent's performance over time. Machine Learning is a large area within AI.
  23. Ways to improve: make fewer mistakes; do not repeat computations performed in the past; find solutions more quickly; find better solutions; solve a wider range of problems; learn user preferences and adapt behaviour accordingly.
  24. Learning tourist profiles in Turist@. The tourist may fill in an initial questionnaire. Analyse the tourist's queries (e.g. science-fiction films) and the tourist's votes (Museum of Modern Art: very good). Cluster tourists with similar preferences, and recommend activities highly valued by tourists in the same class.
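A minimal sketch of the cluster-and-recommend idea, with an invented dataset and a naive nearest-neighbour similarity (the real Turist@ system is not shown here):

```java
import java.util.*;

/** Toy nearest-neighbour recommender: suggest what similar tourists liked. */
public class TouristRecommender {

    /** Tourists rate activities from 0 (dislikes) to 5 (loves); higher result = more similar. */
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        double d = 0;
        int common = 0;
        for (String activity : a.keySet()) {
            if (b.containsKey(activity)) {
                double diff = a.get(activity) - b.get(activity);
                d += diff * diff;
                common++;
            }
        }
        // No overlap means we know nothing about this pair: treat as maximally dissimilar
        return common == 0 ? Double.NEGATIVE_INFINITY : -d;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> ratings = Map.of(
                "alice", Map.of("modern-art-museum", 2, "seafood-restaurant", 1),
                "bob",   Map.of("modern-art-museum", 5, "gothic-cathedral", 4),
                "carol", Map.of("seafood-restaurant", 5, "beach-walk", 5));

        Map<String, Integer> newTourist = Map.of("modern-art-museum", 4);

        // Find the most similar existing tourist ("the same class") ...
        String nearest = ratings.entrySet().stream()
                .max(Comparator.comparingDouble(e -> similarity(newTourist, e.getValue())))
                .get().getKey();

        // ... and recommend their highly rated activities the new tourist hasn't seen
        ratings.get(nearest).entrySet().stream()
                .filter(e -> e.getValue() >= 4 && !newTourist.containsKey(e.getKey()))
                .forEach(e -> System.out.println("Recommend: " + e.getKey()));
    }
}
```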
  25. Advantages of learning systems. They can adapt better to a dynamic environment or to unknown situations, without the need for an exhaustive set of rules defined at design time. They can leverage previous positive/negative experiences to act more intelligently in the future.
  26. 8-Autonomy. A very important difference between agents and traditional programs: the ability to pursue goals in an autonomous way, without direct, continuous interaction or commands from the user. Given a vague/imprecise goal, the agent must determine the best way to attain it.
  27. Autonomous decisions. Given a certain goal: which actions should I perform? How should I perform these actions? Should I seek/request/(buy!) help or collaboration from other agents? Less work for the human user!
  28. Autonomy requirements. For an agent to be autonomous, it must: have control over its own actions (an agent cannot be obliged to do anything); have control over its internal state (the agent's state cannot be externally modified by another agent); and have appropriate access to the resources and capabilities needed to perform its tasks (e.g. access to the Internet, communication channels with other agents).
  29. Autonomy limitations. Sometimes the user may restrict the autonomy of the agent. For instance, the agent could have the autonomy to search the Internet for the best place to buy a given book, but not the autonomy to actually buy the book using the user's credit card details.
  30. Issues. Autonomy also raises complex issues. Legal issues: who is responsible for the agent's actions? Ethical issues: to what extent should decisions be delegated to computational agents (e.g. agents in medical decision support systems)?
  31. 9-Temporal continuity. Agents are continuously running processes, either running actively in the foreground or sleeping/passive in the background until a certain message arrives. They are not once-only computations or scripts that map a single input to a single output and then terminate.
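A minimal Java sketch of this always-running pattern: the agent blocks on a message queue rather than terminating, and wakes up when a message arrives (the message type and queue wiring are illustrative assumptions):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative continuously running agent: it never maps one input to one output and exits. */
public class ContinuousAgent implements Runnable {

    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    public void deliver(String message) { inbox.add(message); }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // Sleeping/passive in the background until a message arrives
                String message = inbox.take();
                System.out.println("Handling: " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // asked to shut down
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ContinuousAgent agent = new ContinuousAgent();
        Thread t = new Thread(agent);
        t.start();
        agent.deliver("hello");
        agent.deliver("lunch-time-alert");
        Thread.sleep(100);
        t.interrupt();  // the agent runs until explicitly stopped
    }
}
```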
  32. 10-Mobility. Mobile agents can be executing on a given computer and, at some point in time, move physically through a network (e.g. the Internet) to another computer, and continue their execution there. In most applications the idea is to go somewhere, perform a given task, and then come back to the initial host with the obtained results.
  33. Example of use: access to a DB. Imagine there is a DB in Australia with thousands of images, and we need to select some images with specific properties. We have to perform some computations on the images to decide whether or not to select them.
  34. Option 1: remote requests. The agent on our computer makes hundreds of requests to the agent managing the DB in Australia. A continuous connection is required, there is heavy use of bandwidth, and all computations are made on our computer.
  35. Option 2: local access. Establish a connection; send a specialised agent to the Australian computer holding the DB <the connection can be dropped here>; make local accesses to the DB, analysing the images there; re-establish the connection; our agent comes back with the selected images.
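A hedged sketch of the lifecycle such a mobile agent might expose; the interface and method names are invented for illustration, and real platforms (e.g. JADE, Aglets) define their own, different APIs:

```java
import java.io.Serializable;

/**
 * Illustrative mobile-agent lifecycle. A real platform would serialize the
 * agent, ship it over the network, and resume it at the destination host.
 */
public interface MobileAgent extends Serializable {

    /** Called before the platform packs and ships the agent; release local resources. */
    void beforeMove();

    /** Called on arrival at the remote host; here the agent queries the DB locally. */
    void onArrival(String hostName);

    /** Called when the agent is back home, carrying its results in its own state. */
    void onReturn();
}
```

The key point is that the agent's state (e.g. the selected images) travels with it, so only two network transfers are needed instead of hundreds of remote requests.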
  36. Another example: tourist travel. [Figure-only slide; no further text.]
  37. Problems of mobile agents. Security: how can I accept mobile agents on my computer? (viruses!). Privacy: is it safe to send an agent carrying my credit card details or my personal preferences? Technical management: each computer has to be able to "pack" an agent, send it to another machine, receive agents from other machines, "validate" them, and let them execute locally.
  38. Kinds of mobility. Do not confuse mobile agents (agents that can move from one computer to another) with agents running on mobile devices (agents executing on portable devices such as PDAs, Tablet PCs, laptops or mobile phones).
  39. 11-Other properties … Benevolence: an agent will always try to do what is asked of it. Veracity: an agent will not knowingly communicate false information. Character: agents must seem honest, trustworthy, … Emotion: agents must exhibit emotional states, such as happiness, sadness, frustration, …
  40. Relationships between properties. More learning => more reactivity; more reasoning => more proactivity; more learning => more autonomy; less autonomy => less proactivity; more reasoning => more rationality.
  41. Conclusions. It is almost impossible for an agent to have all those properties! The most basic properties are: autonomy; reactiveness; reasoning and learning; communication. Task for the practical exercise: think about the properties you want your agents to have!
  42. Agents versus related technologies. If agents are autonomous entities that display intelligent behaviour, what makes them so different from other well-known techniques, like object-oriented programming or intelligent (knowledge-based) systems?
  43. Agents and Objects. Are agents just objects by another name? An object encapsulates some state, communicates via message passing, and has methods, corresponding to operations that may be performed on the state.
  44. Agents vs Objects (I). Agents are autonomous: agents embody a stronger notion of autonomy than objects; they decide for themselves whether or not to perform an action on request from another agent. When a method is invoked on an object, it is always executed.
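The contrast can be made concrete with a small, invented example: the object has no say in whether its method runs, while the agent evaluates each request against its own state and goals before agreeing:

```java
/** Invented example contrasting object invocation with agent-style request handling. */
public class AgentsVsObjects {

    /** An object: when transfer() is invoked, it always executes. */
    static class Account {
        int balance = 100;
        void transfer(int amount) { balance -= amount; }  // no way to refuse
    }

    /** An agent: it decides for itself whether to honour the request. */
    static class AccountAgent {
        private int balance = 100;
        boolean requestTransfer(String requester, int amount) {
            // Refuse if the requester is not trusted or the action harms our goals
            if (!"trusted-peer".equals(requester) || amount > balance) {
                return false;  // REFUSE
            }
            balance -= amount;
            return true;       // AGREE, and do it
        }
    }

    public static void main(String[] args) {
        new Account().transfer(500);  // executes unconditionally, even overdrawing

        AccountAgent agent = new AccountAgent();
        System.out.println(agent.requestTransfer("stranger", 10));      // false: refused
        System.out.println(agent.requestTransfer("trusted-peer", 10));  // true: accepted
    }
}
```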
  45. Agents vs Objects (II). Agents are smart/intelligent: capable of flexible (reactive, proactive, social) behaviour; the standard object model has nothing to say about such types of behaviour. Agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control.
  46. Agents vs Objects (III).
      Objects: encapsulate state, and control over it via methods; passive (have no control over a method's execution); non-autonomous; reactive to events.
      Agents: encapsulate state, and control over it via actions and goals; active (can decide when to act and how); autonomous; proactive.
  47. In summary … Objects do it for free: an object cannot refuse a method invocation. Agents do it because they want to: the service requester is authorised, the agent has enough resources available, the action is convenient for the agent, … Agents do it for money: the agent can get an economic profit.
  48. Agents and Expert Systems. Aren't agents just expert systems by another name? Expert systems typically contain disembodied 'expertise' about some (abstract) domain of discourse (e.g. blood diseases). Example: MYCIN knows about blood diseases in humans. It has a wealth of knowledge about blood diseases, in the form of rules, and a doctor can obtain expert advice by giving MYCIN facts, answering its questions, and posing queries.
  49. Agents and Expert Systems. Main differences: agents are situated in an environment (MYCIN is not aware of the world; the only information it obtains is by asking questions of the user); agents act (MYCIN does not operate on patients). Sometimes an expert system is "agentified" and included in a MAS.
  50. Development of agent-oriented systems. Agents have many interesting and positive properties, but it is difficult to design, implement, deploy and maintain a MAS, and there are no firmly established agent-oriented software engineering methodologies.
  51. Pitfalls of Agent Development. There are several potential problems that should be carefully considered when starting an agent-based approach. The main problem categories are: conceptual; analysis and design; micro (agent) level; macro (society) level; implementation.
  52. Overselling Agents (I). Agents are not magic! If you can't do it with ordinary software, you probably can't do it with agents. There is no evidence that any system developed using agent technology could not have been built just as easily using non-agent techniques.
  53. Overselling Agents (II). Agents may make it easier to solve certain classes of problems, but they do not make the impossible possible. Agents are not AI by the back door: don't completely equate agents and AI.
  54. Universal Solution? Agents have been used in a wide range of applications, but they are not a universal solution. For many applications, conventional software paradigms (e.g., OO) can be more appropriate. Given a problem for which an agent and a non-agent approach appear equally good, prefer the non-agent solution! In summary: beware of believing that agents are the right solution to every problem.
  55. Don’t Know Why You Want Agents. Often, projects appear to be going well ("We have agents!"), but there is no clear vision about where to go with them. The lesson: understand your reasons for attempting an agent development project and what you expect to gain from it. Ask yourself: do we really need agent technology to solve this problem?
  56. Don’t Know What Agents Are Good For. Having developed some agent technology, you search for an application in which to use it: putting the cart before the horse! The lesson: be sure you understand how and where your new technology may be most usefully applied. Do not attempt to apply it to arbitrary problems, and resist the temptation to apply it to every problem.
  57. Confuse Prototypes with Systems. Prototypes are easy (particularly with nice GUI builders!); field-tested production systems are hard. For instance, how will the agent-based software be maintained (e.g. in a hospital)? The process of scaling up from a single-machine multi-threaded Java application to a multi-user distributed system is much harder than it appears.
  58. A Silver Bullet for Soft. Eng. (I). The holy grail of software engineering is a "silver bullet": an order-of-magnitude improvement in software development. Technologies previously promoted as the silver bullet: COBOL; automatic programming; expert systems; graphical/visual programming; object-oriented programming; Java.
  59. A Silver Bullet for Soft. Eng. (II). Agent technology is not a silver bullet. It is true that there are good reasons to believe that agents are a useful way of tackling some problems, but these arguments remain largely untested in practice.
  60. Don’t Forget - It’s Distributed (I). Distributed systems are one of the most complex classes of computer systems to design and implement: not only the different modules, but also the interconnections between them. Multi-agent systems tend to be distributed! The problems of distribution do not go away just because a system is agent-based.
  61. Don’t Forget - It’s Distributed (II). A typical multi-agent system will be more complex than a typical distributed system: autonomous entities, conflicts between entities, dynamic forms of cooperation, emergent behaviour. Recognize distributed systems problems, and make use of DS-DAI expertise.
  62. Don’t exploit concurrency (I). There are many ways of cutting up any problem: functional decomposition, organizational decomposition, physical decomposition, resource-based decomposition. One of the most obvious features of a poor multi-agent design is that the amount of concurrent problem solving is comparatively small, or in extreme cases even non-existent.
  63. Don’t exploit concurrency (II). Serial processing in a distributed system: agent A performs a task and sends the results to B; agent B performs another task and sends the results to C; ... There is only ever a single thread of control, so concurrency, one of the most important potential advantages of multi-agent solutions, is not exploited. If you don't exploit concurrency, why have an agent solution?
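A small, invented sketch of the difference: the serial pipeline keeps one "agent" busy at a time, while the concurrent version lets each agent solve its own slice of the problem simultaneously:

```java
import java.util.List;
import java.util.concurrent.*;

/** Invented sketch: exploiting concurrency across agents instead of a serial pipeline. */
public class ConcurrencyPitfall {

    static int solveSlice(String slice) {
        // Stand-in for each agent's real problem solving
        return slice.length();
    }

    public static void main(String[] args) throws Exception {
        List<String> slices = List.of("north-region", "south-region", "east-region");

        // Anti-pattern: agents A, B, C work strictly one after another
        int serialTotal = 0;
        for (String slice : slices) {
            serialTotal += solveSlice(slice);   // only one "agent" active at a time
        }

        // Better: each agent solves its slice concurrently
        ExecutorService agents = Executors.newFixedThreadPool(slices.size());
        List<Future<Integer>> partials = agents.invokeAll(
                slices.stream()
                      .map(s -> (Callable<Integer>) () -> solveSlice(s))
                      .toList());
        int concurrentTotal = 0;
        for (Future<Integer> partial : partials) {
            concurrentTotal += partial.get();   // combine partial results
        }
        agents.shutdown();

        System.out.println(serialTotal + " == " + concurrentTotal);  // same answer, more parallelism
    }
}
```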
  64. Want Your Own Architecture (I). Agent architectures are designs for building agents. Many agent architectures have been proposed over the years, and there is a great temptation to imagine that you need your own. Driving forces behind this belief: a "not designed here" mindset; intellectual property.
  65. Want Your Own Architecture (II). Problems: architecture development takes years, and there is no clear payback (too much effort to reinvent the wheel). Some options: buy one; take one off the shelf; do without (!).
  66. Use Too Much AI. There is a temptation to focus on the agent-specific aspects of the application. The result: an agent framework too overburdened with experimental AI techniques to be usable. This is fuelled by "feature envy", where one reads about agents that have the ability to learn, plan, talk, sing, dance… Resist the temptation to believe such features are essential in your agent system. The lesson: build agents with a minimum of AI; as success is obtained with such systems, progressively evolve them into richer systems.
  67. Not Enough AI. Don’t call your on-off switch an agent! Be realistic: it is becoming common to find everyday distributed systems referred to as MAS. Problems: it leads to the term "agent" losing any meaning; it raises the expectations of software recipients; it leads to cynicism on the part of software developers.
  68. See agents everywhere. In a "pure" agent-oriented system, everything is an agent: agents for addition, subtraction, … Naively viewing everything as an agent is inappropriate. Combine the agent and non-agent parts of an application, and choose the right grain size: more than 10 agents could already be a big system, in some circumstances.
  69. Too Many Agents. Individual agents don’t have to be very complex to generate complex behaviour at the system level. A large number of agents can lead to interesting and unexpected emergent functionality, but also to chaotic behaviour. Lessons: keep interactions to a minimum; keep protocols simple.
  70. Too few agents. Some designers imagine a separate agent for every possible task; others don’t recognize the value of a multi-agent approach at all, and build one "all-powerful" centralised planning and controller agent. The result is like an OO program with one class. Lesson: choose agents of the right size.
  71. System is anarchic. You cannot simply bundle a group of agents together (except possibly in simulations): most agent systems require system-level engineering. For large systems, or for systems in which the society is supposed to act with some commonality of purpose, this is particularly important. An organization structure (even in the form of formal communication channels) is essential.
  72. Ignore Available Standards. There are no firmly established agent standards, and developers often believe they have no choice but to design and build all agent-specific components from scratch. But there are some de facto standards, for example: CORBA (communication middleware); OWL (ontology language); the FIPA agent architecture; FIPA-ACL (agent communication language).
  73. Readings for this week. M. Wooldridge: An Introduction to MultiAgent Systems – beginning of chapter 2, section 10.3. Pitfalls of Agent-Oriented Development (M. Wooldridge, N. Jennings). PFC Turist@ (Alex Viejo).