Multi-agent systems


Transcript

  • 1. Multiagent Systems. R. Akerkar, American University of Armenia, Yerevan, Armenia.
  • 2. Outline: 1. History and perspectives on multiagents; 2. Agent Architecture; 3. Agent Oriented Software Engineering; 4. Mobility; 5. Autonomy and Teaming.
  • 3. Definitions. An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. [Yoav Shoham, 1993]. An entity is a software agent if and only if it communicates correctly in an agent communication language. [Genesereth and Ketchpel, 1994]. Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. [Hayes-Roth, 1995]
  • 4. Definitions. An agent is anything that can be viewed as (a) perceiving its environment, and (b) acting upon that environment [Russell and Norvig, 1995]. A computer system that is situated in some environment and is capable of autonomous action in its environment to meet its design objectives. [Wooldridge, 1999]
  • 5. Agents: A working definition. An agent is a computational system that interacts with one or more counterparts or real-world systems, with the following key features to varying degrees: autonomy, reactiveness, pro-activeness, and social abilities. E.g., autonomous robots, human assistants, service agents. The need is for automation and distributed use of online resources.
  • 6. Test of Agenthood [Huhns and Singh, 1998]: "A system of distinguished agents should substantially change semantically if a distinguished agent is added."
  • 7. Agents vs. Objects. "Objects with attitude" [Bradshaw, 1997]. Agents are similar to objects since they are computational units that encapsulate a state and communicate via message passing. Agents differ from objects since they have a strong sense of autonomy and are active rather than passive.
  • 8. Agent Oriented Programming, Yoav Shoham. AOP principles: 1. The state of an object in OO programming has no generic structure. The state of an agent has a "mentalistic" structure: it consists of mental components such as beliefs and commitments. 2. Messages in object-oriented programming are coded in an application-specific, ad-hoc manner. A message in AOP is coded as a "speech act" according to a standard agent communication language that is application-independent.
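The second principle can be made concrete with a small sketch. This is an illustrative speech-act message structure, not Shoham's actual AGENT0 syntax; the field names and the `inform` helper are assumptions for the example.

```python
from dataclasses import dataclass

# A hypothetical speech-act message in the spirit of AOP: the
# performative ("inform", "request", "unrequest") is standardized and
# application-independent, while content carries the domain payload.
@dataclass
class SpeechAct:
    performative: str
    sender: str
    receiver: str
    content: str

def inform(sender: str, receiver: str, fact: str) -> SpeechAct:
    """Build an 'inform' speech act passing a fact to another agent."""
    return SpeechAct("inform", sender, receiver, fact)

msg = inform("agent_a", "agent_b", "temperature(room1, 21)")
print(msg.performative)  # -> inform
```

The point of the sketch is that any agent can interpret `performative` without knowing the application: only `content` is domain-specific.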
  • 9. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. Different entities may belong to different epistemic categories: there are agents, events, actions, commitments, claims, and objects. We distinguish between physical and communicative actions/events. Actions create events, but not all events are created by actions. Some of these modeling concepts are indexical, that is, they depend on the perspective chosen: in the perspective of a particular agent, actions of other agents are viewed as events, and commitments of other agents are viewed as claims against them.
  • 10. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. In the internal perspective of an agent, a commitment refers to a specific action to be performed in due time, while a claim refers to a specific event that is created by an action of another agent and has to occur in due time. Communication is viewed as asynchronous point-to-point message passing. We take the expressions receiving a message and sending a message as synonyms of perceiving a communication event and performing a communication act. There are six designated relationships in which specifically agents, but not objects, participate: only an agent perceives environment events, receives and sends messages, does physical actions, has a Commitment to perform some action in due time, and has a Claim that some action event will happen in due time.
  • 11. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. An institutional agent consists of a certain number of (institutional, artificial and human) internal agents acting on behalf of it. An institutional agent can only perceive and act through its internal agents. Within an institutional agent, each internal agent has certain rights and duties. There are three kinds of duties: an internal agent may have the duty to fulfil commitments of a certain type, the duty to monitor claims of a certain type, or the duty to react to events of a certain type on behalf of the organization. A right refers to an action type such that the internal agent is permitted to perform actions of that type on behalf of the organization.
  • 12. Agent Typology. Human agents: Person, Employee, Student, Nurse, or Patient. Artificial agents: owned and run by a legal entity. Institutional agents: a bank or a hospital. Software agents: agents designed with software. Information agents: databases and the internet. Autonomous agents: non-trivial independence. Interactive/Interface agents: designed for interaction. Adaptive agents: non-trivial ability for change. Mobile agents: code and logic mobility.
  • 13. Agent Typology. Collaborative/Coordinative agents: non-trivial ability for coordination, autonomy, and sociability. Reactive agents: no internal state and shallow reasoning. Hybrid agents: a combination of deliberative and reactive components. Heterogeneous agents: a system with various agent sub-components. Intelligent/smart agents: reasoning and intentional notions. Wrapper agents: facility for interaction with non-agents.
  • 14. Multi-agency. A multi-agent system is a system made up of multiple agents, with the following key features among agents to varying degrees of commonality and adaptation: social rationality, normative patterns, and a system of values. E.g., HVAC, eCommerce, space missions, Soccer, Intelligent Home, "talk" monitor. The motivation is coherence and distribution of resources.
  • 15. Applications of Multiagent Systems. Electronic commerce: B2B, InfoFlow, eCRM. Network and system management agents: e.g., the telecommunications companies. Real-time monitoring and control of networks: ATM. Modeling and control of transportation systems: delivery. Information retrieval: online search. Automatic meeting scheduling. Electronic entertainment: eDog.
  • 16. Applications of Multiagent Systems (cont.). Decision and logistic support agents: military and utility companies. Interest matching agents: commercial sites like Amazon.com. User assistance agents: e.g., MS Office assistant. Organizational structure agents: supply-chain ops. Industrial manufacturing and production: manufacturing cells. Personal agents: emails. Investigation of complex social phenomena such as evolution of roles, norms, and organizational structures.
  • 17. Summary of Business Benefits. Modeling existing organizations and dynamics. Modeling and engineering e-societies. New tools for distributed knowledge-ware.
  • 18. Three views of Multi-agency. Constructivist: agents are rational in the sense of Newell's principle of individual rationality; they only perform goals which bring them a positive net benefit, without regard to other agents. These are self-interested agents. Sociality: agents are rational in the sense of Jennings' principle of social rationality; they perform actions whose joint benefit is greater than their joint loss. These are self-less, responsible agents. Reductionist: agents which accept all goals they are capable of performing. These are benevolent agents.
  • 19. Multi-agency: allied fields (DAI). MAS: (1) online social laws, (2) agents may adopt goals and adapt beyond any problem. DPS: offline social laws. CPS: (1) agents are a 'team', (2) agents 'know' the shared goal. In DAI, a problem is automatically decomposed among distributed nodes, whereas in multi-agents each agent chooses whether to participate. Distributed planning is distributed and decentralized action selection, whereas in multi-agents, agents keep their own copies of a plan that might include others.
  • 20. Multi-agent assumptions and goals. Agents have their own intentions and the system has distributed intentionality. Agents model other agents' mental states in their own decision making. Agent internals are less central than agent interactions. Agents deliberate over their interactions. Emergence at the agent level and at the interaction level is desirable. The goal is to find some principles for, or principled ways to explore, interactions.
  • 21. Origins of Multi-agent systems. Carl Hewitt's Actor model, 1970. Blackboard systems: Hearsay (1975), BB1, GBB. Distributed Vehicle Monitoring Testbed (DVMT, 1983). Distributed AI. Distributed OS.
  • 22. MAS Orientations: Computational Organization Theory, Databases, Sociology, Formal AI, Economics, Distributed Problem Solving, Cognitive Science, Psychology, Systems Theory, Distributed Computing.
  • 23. Multi-agents in the large versus in the small. In the small (Distributed AI): a handful of "smart" agents with emergence in the agents. In the large: 100+ "simple" agents with emergence in the group: Swarms (Bugs), http://www.swarm.org/
  • 24. Outline: 1. History and perspectives on multiagents; 2. Agent Architecture; 3. Agent Oriented Software Engineering; 4. Mobility; 5. Autonomy and Teaming.
  • 25. Abstract Architecture. (Figure: the agent observes environment states and produces actions, which feed back into the Environment.)
  • 26. Architectures: deduction/logic-based, reactive, BDI, layered (hybrid).
  • 27. Abstract Architectures. An abstract model: <S, A, action>. S = {s1, s2, ...} is the set of environment states; A = {a1, a2, ...} is the set of possible actions. This allows us to view an agent as a function action : S* -> A, mapping a sequence of environment states to an action.
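The abstract view can be written down directly. This is a minimal sketch under assumed example states and actions (the "clean"/"dirty" domain is illustrative, not from the slides): an agent is just a function from the run of states seen so far to an action.

```python
# The abstract model <S, A, action>: an agent maps a state history
# (an element of S*) to an action in A.
S = ["clean", "dirty"]   # example environment states
A = ["wait", "vacuum"]   # example actions

def action(history):
    """An illustrative policy: act on the most recent state only."""
    return "vacuum" if history[-1] == "dirty" else "wait"

print(action(["clean", "dirty"]))  # -> vacuum
```

Because `action` receives the whole history, the abstract model also covers agents whose choice depends on everything they have seen, not just the current state.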
  • 28. Logic-Based Architectures. These agents have internal state, see and next functions, and model decision making by a set of deduction rules for inference: see : S -> P; next : D x P -> D; action : D -> A. They use logical deduction to try to prove the next action to take. Advantages: simple, elegant, logical semantics. Disadvantages: computational complexity; representing the real world.
  • 29. Reactive Architectures. Reactive architectures use neither a symbolic world model nor symbolic reasoning. An example is Rod Brooks's subsumption architecture. Advantages: simplicity, computational tractability, robustness, elegance. Disadvantages: modeling limitations, correctness, realism.
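A toy subsumption-style controller can show the idea without any symbolic model. This is a sketch, not Brooks's actual robot code: behavior names and the percept format are assumptions. Behaviors are ordered by priority, and the first one whose trigger fires suppresses (subsumes) those below it.

```python
# Subsumption-style control: higher-priority behaviors override lower
# ones; each behavior maps the raw percept directly to an action.
def avoid(percept):
    """High priority: turn away when an obstacle is sensed."""
    return "turn" if percept.get("obstacle") else None

def wander(percept):
    """Lowest priority: default behavior, always applicable."""
    return "forward"

LAYERS = [avoid, wander]   # highest priority first

def act(percept):
    for behavior in LAYERS:
        out = behavior(percept)
        if out is not None:   # this layer fires and subsumes the rest
            return out

print(act({"obstacle": True}))  # -> turn
print(act({}))                  # -> forward
```

Note there is no world model anywhere: the percept goes straight to an action, which is exactly the property the slide attributes to reactive architectures.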
  • 30. Reflexive Architectures: the simplest type of reactive architecture. Reflexive agents decide what to do without regard to history; purely reflexive: action : P -> A. Example: a thermostat. action(s) = off if temp = OK, on otherwise.
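The slide's thermostat fits in two lines. A minimal sketch of the purely reflexive mapping action : P -> A, deciding from the current percept alone with no stored history:

```python
# A purely reflexive agent: the current percept (is the temperature OK?)
# fully determines the action, per the slide's thermostat rule.
def thermostat(temp_ok: bool) -> str:
    return "off" if temp_ok else "on"

print(thermostat(True))   # -> off
print(thermostat(False))  # -> on
```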
  • 31. Reflex agent without state (Russell and Norvig, 1995).
  • 32. Reflex agent with state (Russell and Norvig, 1995).
  • 33. Goal-oriented agent: a more complex reactive agent (Russell and Norvig, 1995).
  • 34. Utility-based agent: a complex reactive agent (Russell and Norvig, 1995).
  • 35. BDI: a Formal Method. Belief: states, facts, knowledge, data. Desire: wish, goal, motivation (these might conflict). Intention: a) select actions, b) perform actions, c) explain choices of action (no conflicts). Commitment: persistence of intentions and trials. Know-how: having the procedural knowledge for carrying out a task.
  • 36. Belief-Desire-Intention. (Figure: sensing the Environment drives belief revision of Beliefs; Beliefs generate options that become Desires; a filter selects Intentions, which lead to action.)
  • 37. Why is BDI a Formal Method? BDI is typically specified in the language of modal logic with possible world semantics. Possible worlds capture the various ways the world might develop. Since the formalism in [Wooldridge 2000] assumes at least a KD axiomatization for each of B, D, and I, each of the sets of possible worlds representing B, D, and I must be consistent. A KD45 logic with the following axioms: K: BDI(a, φ → ψ, t) → (BDI(a, φ, t) → BDI(a, ψ, t)). D: BDI(a, φ, t) → ¬BDI(a, ¬φ, t). 4: B(a, φ, t) → B(a, B(a, φ, t), t). 5: ¬B(a, φ, t) → B(a, ¬B(a, φ, t), t). K and D together give the normal modal system KD.
  • 38. A simplified BDI agent algorithm:
    1. B := B0;
    2. I := I0;
    3. while true do
    4.   get next percept ρ;
    5.   B := brf(B, ρ);        // belief revision
    6.   D := options(B, D, I); // determination of desires
    7.   I := filter(B, D, I);  // determination of intentions
    8.   π := plan(B, I);       // plan generation
    9.   execute π
    10. end while
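The loop above is runnable once brf, options, filter, and plan are given bodies. The ones below are deliberately trivial stand-ins (a real agent would replace each with genuine belief revision, deliberation, and planning), and the "fire_alarm" percept is an invented example; only the control flow follows the slide.

```python
# Toy instances of the four BDI functions from the slide's algorithm.
def brf(B, p):
    """Belief revision: just record the percept as a belief."""
    return B | {p}

def options(B, D, I):
    """Desires: want to handle every current belief."""
    return {("handle", b) for b in B}

def filt(B, D, I):   # named filt to avoid shadowing Python's filter()
    """Filter: commit to a single desire as the new intention set."""
    return set(sorted(D)[:1]) if D else I

def plan(B, I):
    """Plan generation: one action per committed intention."""
    return [("do",) + i for i in I]

B, D, I = set(), set(), set()
for percept in ["fire_alarm"]:        # stands in for 'while true'
    B = brf(B, percept)
    D = options(B, D, I)
    I = filt(B, D, I)
    pi = plan(B, I)
print(pi)  # -> [('do', 'handle', 'fire_alarm')]
```

The structure mirrors lines 4-9 of the slide exactly; only the termination condition is changed so the sketch halts.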
  • 39. Correspondences. Belief-Goal compatibility: Des φ → Bel φ. Goal-Intention compatibility: Int φ → Des φ. Volitional commitment: Int Do(a) → Do(a). Awareness of goals and intentions: Des φ → Bel Des φ; Int φ → Bel Int φ.
  • 40. Layered Architectures. Layering is based on a division of behaviors into automatic and controlled. Layering might be horizontal (i.e., I/O at each layer) or vertical (i.e., I/O is dealt with by a single layer). Advantages: these are popular and offer fairly intuitive modeling of behavior. Disadvantages: these are too complex and have non-uniform representations.
  • 41. Outline: 1. History and perspectives on multiagents; 2. Agent Architecture; 3. Agent Oriented Software Engineering; 4. Mobility; 5. Autonomy and Teaming.
  • 42. Agent-Oriented Software Engineering. AOSE is an approach to developing software using agent-oriented abstractions that models high-level interactions and relationships. Agents are used to model run-time decisions about the nature and scope of interactions that are not known ahead of time.
  • 43. Designing Agents: Recommendations from H. Van Dyke Parunak's (1996) "Go to the Ant": Engineering Principles from Natural Multi-Agent Systems, Annals of Operations Research, special issue on AI and Management Science. 1. Agents should correspond to things in the problem domain rather than to abstract functions. 2. Agents should be small in mass (a small fraction of the total system), time (able to forget), and scope (avoiding global knowledge and action). 3. The agent community should be decentralized, without a single point of control or failure. 4. Agents should be neither homogeneous nor incompatible, but diverse. Randomness and repulsion are important tools for establishing and maintaining this diversity. 5. Agent communities should include a dissipative mechanism to whose flow they can orient themselves, thus leaking entropy away from the macro level at which they do useful work. 6. Agents should have ways of caching and sharing what they learn about their environment, whether at the level of the individual, the generational chain, or the overall community organization. 7. Agents should plan and execute concurrently rather than sequentially.
  • 44. Organizations. Human organizations are several agents, engaged in multiple goal-directed tasks, with distinct knowledge, culture, memories, history, and capabilities, and separate legal standing from that of individual agents. Computational Organization Theory (COT) models information production and manipulation in organizations of human and computational agents.
  • 45. Management of Organizational Structure. Organizational constructs are modeled as entities in multiagent systems. Multiagent systems have built-in mechanisms for flexibly forming, maintaining, and abandoning organizations. Multiagent systems can provide a variety of stable intermediary forms in rapid systems development.
  • 46. 7.2.1 Agent and Agency.
  • 47. AOSE Considerations. What agents, how many, and with what structure? Model of the environment? Communication? Protocols? Relationships? Coordination?
  • 48. Stages of Agent-Oriented Software Engineering. A. Requirements: provided by the user. B. Analysis: objectives and invariants. C. Design: agents and interactions. D. Implementation: tools and techniques.
  • 49. KAoS, Bradshaw, et al. Knowledge (facts) represents beliefs in which the agent has confidence. Facts and beliefs may be held privately or be shared. Desires represent goals and preferences that motivate the agent to act. Intentions represent a commitment to perform an action. There is no exact description of capabilities. Life cycle: birth, life, and death (also a cryogenic state). Agent types: KAoS, Mediation (KAoS and outside), Proxy (mediator between two KAoS agents), Domain Manager (agent registration), and Matchmaker (mediator of services). Omitted: emotions, learning, agent relationships, fraud, trust, security.
  • 50. Gaia, Wooldridge, et al. The Analysis phase. Roles model: Permissions (resources); Responsibilities (safety properties and liveness properties); Protocols. Interactions model: purpose, initiator, responder, inputs, outputs, and processing of the conversation. The Design phase: Agent model; Services model; Acquaintance model. Omitted: trust, fraud, commitment, and security.
  • 51. TAEMS: Keith Decker and Victor Lesser. The agents are simple processors. The internal structure of agents includes (a) beliefs (knowledge) about task structure, (b) states, (c) actions, and (d) a strategy, constantly being updated, of what methods the agent intends to execute at what time. Omitted: roles, skills, or resources.
  • 52. BDI-based Agent-Oriented Methodology (KGR): Kinny, Georgeff and Rao. External viewpoint: the social system structure and dynamics; Agent Model + Interaction Model; independent of the agent cognitive model and communication. Internal viewpoint: the Belief Model, the Goal Model, and the Plan Model. Beliefs: the environment, internal state, the actions repertoire. Goals: possible goals, desired events. Plans: state charts.
  • 53. MaSE – Multi-agent Systems Engineering, DeLoach. Domain Level Design (use AgML for the Agent Type Diagram, Communication Hierarchy Diagram, and Communication Class Diagrams). Agent Level Design (use AgDL for agent conversation). Component Design: AgDL. System Design: AgML. Languages: AgML (Agent Modeling Language, a graphical language) and AgDL (Agent Definition Language, covering the system-level behavior and the internal behavior of the agent). Rich in communication, poor in social structures.
  • 54. Scott DeLoach's MaSE. (Diagram: Sequence Diagrams, Roles, and Tasks feed the Agent Class Diagram and Conversation Diagram, then the Internal Agent Diagram and the Deployment Diagram.)
  • 55. The TOVE Project (1998); Mark Fox, et al. Organizational hierarchy: divisions and sub-divisions. Goals, sub-goals, and their hierarchy (using AND & OR). Roles, and their relations to skills, goals, authority, processes, policies. Skills, and their link to roles. Agents, their affiliation with teams and divisions; Commitment, Empowerment. Communication links between agents: sending and receiving information. Communication at three levels: information, intentions (ask, tell, deny, ...), and conventions (semantics); levels 2 & 3 are designed using speech acts. Teams as temporary groups of agents. Activities and their states, and the connection to resources and constraints. Resources and their relation to activities and activity states. Constraints on activities (what activities can occur in a specific situation and at a specific time). Time and the duration of activities: actions occur at a point in time and have duration. Situation. Shortcomings: central decision making.
  • 56. Agent-Oriented Programming (AOP): Yoav Shoham. AGENT0 is the first AOP language, and its logical component is a quantified multi-modal logic. Mental state: beliefs, capabilities, and commitments (or obligations). Communication: 'request' (to perform an action), 'unrequest' (to refrain from action), and 'inform' (to pass information).
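How capabilities, commitments, and the three performatives fit together can be sketched briefly. This is an illustrative reading of the slide, not AGENT0's actual syntax or semantics; the class and method names are assumptions.

```python
# A sketch of AGENT0-style mental state: an agent accepts a 'request'
# only for actions within its capabilities, recording a commitment;
# 'unrequest' withdraws a matching commitment.
class Agent0Sketch:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)
        self.commitments = []

    def receive(self, performative, action):
        if performative == "request" and action in self.capabilities:
            self.commitments.append(action)
            return True
        if performative == "unrequest" and action in self.commitments:
            self.commitments.remove(action)
        return False

a = Agent0Sketch({"open_valve"})
a.receive("request", "open_valve")     # within capabilities: committed
a.receive("request", "fly")            # not capable: ignored
print(a.commitments)  # -> ['open_valve']
```

The key property mirrored from the slide is that commitments are constrained by capabilities: an agent never commits to an action it cannot perform.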
  • 57. The MadKit Agent Platform Architecture: Olivier Gutknecht, Jacques Ferber. Three core concepts: agent, group, and role. Interaction language. Organizations: a set of groups.
  • 58. Outline: 1. History and perspectives on multiagents; 2. Agent Architecture; 3. Agent Oriented Software Engineering; 4. Mobility; 5. Autonomy and Teaming.
  • 59. Mobile Agents. [Singh, 1999] A computation that can change its location of execution (given a suitable underlying execution environment), both code and program state. [Papaioannou, 1999] A software agent that is able to migrate from one host to another in a computer network is a mobile agent. [IBM] Mobile network agents are programs that can be dispatched from one computer and transported to a remote computer for execution. Arriving at the remote computer, they present their credentials and obtain access to local services and data. The remote computer may also serve as a broker by bringing together agents with similar interests and compatible goals, thus providing a meeting place at which agents can interact.
  • 60. Mobile Agent Origins. Batch jobs. Distributed operating systems (migration is transparent to the user). Telescript [General Magic, Inc. USA, 1994]: migration of an executing program for use of local resources.
  • 61. A paradigm shift: Distributed Systems versus mobile code. Instead of masking the physical location of a component, mobile code infrastructures make it evident. Code mobility is geared for Internet-scale systems ... unreliable. Programming is location-aware ... location is available to the programmer. Mobility is a choice ... migration is controlled by the programmer or at runtime by the agent. Load balancing is not the driving force ... instead flexibility, autonomy and disconnected operations are key factors.
  • 62. A paradigm comparison: 2 components, 2 hosts, a logic, a resource, messages, a task. Remote Computation. In remote computation, components in the system are static, whereas logic can be mobile. For example, component A, at host HA, contains the required logic L to perform a particular task T, but does not have access to the required resources R to complete the task. R can be found at HB, so A forwards the logic to component B, which also resides at HB. B then executes the logic before returning the result to A. E.g., batch entries.
  • 63. A paradigm comparison: 2 components, 2 hosts, a logic, a resource, messages, a task. Code on Demand. In Code on Demand, component A already has access to resource R. However, A (or any other component at host HA) has no idea of the logic required to perform task T. Thus, A sends a request to B for it to forward the logic L. Upon receipt, A is then able to perform T. An example of this abstraction is a Java applet, in which a piece of code is downloaded from a web server by a web browser and then executed.
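Code on Demand can be shown in miniature. This is a sketch only: the "server" is a dict standing in for a remote host B, and the task and logic names are invented; the point is the direction of movement, with the data (R) staying at A while the logic (L) travels.

```python
# Code on Demand in miniature: A holds the resource (data) but fetches
# the logic from 'B' and executes it locally against that resource.
resource = [3, 1, 2]                              # R, already at host A

SERVER_B = {"task_T": "result = sorted(data)"}    # logic L stored at B

def code_on_demand(task):
    src = SERVER_B[task]          # A requests L from B
    env = {"data": resource}      # bind L's inputs to A's local R
    exec(src, env)                # A executes L locally
    return env["result"]

print(code_on_demand("task_T"))  # -> [1, 2, 3]
```

The same four pieces (components, hosts, logic, resource) rearranged give the other three paradigms: send the data instead (Client/Server), send the logic the other way (Remote Computation), or send the whole component (Mobile Agents).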
  • 64. A paradigm comparison: 2 components, 2 hosts, a logic, a resource, messages, a task. Mobile Agents. With the mobile agent paradigm, component A already has the logic L required to perform task T, but again does not have access to resource R. This resource can be found at HB. This time, however, instead of forwarding/requesting L to/from another component, component A itself is able to migrate to the new host and interact locally with R to perform T. This method is quite different from the previous two examples; in this instance an entire component is migrating, along with its associated data and logic. This is potentially the most interesting example of all the mobile code abstractions. There are currently no contemporary examples of this approach, but we examine its capabilities in the next section.
  • 65. A paradigm comparison: 2 components, 2 hosts, a logic, a resource, messages, a task. Client/Server. Client/Server is a well-known architectural abstraction that has been employed since the first computers began to communicate. In this example, B has the logic L to carry out task T, and has access to resource R. Component A has none of these, and is unable to transport itself. Therefore, for A to obtain the result of T, it must resort to sending a request to B, prompting B to carry out task T. The result is then communicated back to A when completed.
  • 66. Problems in distributed systems: J. Waldo. Latency: most obvious, least worrisome. Memory: access issues; pointers cannot be used; because memory is both local and remote, call types have to differ; no possibility of shared memory. Partial failure: a defining problem of distributed computing; not possible in local computing. Concurrency: adds significant overhead to the programming model; no programmer control of method invocation order. Conclusion: we should treat local and remote objects differently. Waldo, J., Wyant, G., Wollrath, A., Kendall, S., "A note on distributed computing", Sun Microsystems Technical Report SML 94-29, 1994.
  • 67. Mobile Agent Toolkit from IBM: basic concepts. Aglet: an aglet is a mobile Java object that visits aglet-enabled hosts in a computer network. It is autonomous, since it runs in its own thread of execution after arriving at a host, and reactive, because of its ability to respond to incoming messages. Proxy: a proxy is a representative of an aglet. It serves as a shield for the aglet that protects the aglet from direct access to its public methods. The proxy also provides location transparency for the aglet; that is, it can hide the aglet's real location. Context: a context is an aglet's workplace. It is a stationary object that provides a means for maintaining and managing running aglets in a uniform execution environment where the host system is secured against malicious aglets. One node in a computer network may run multiple servers and each server may host multiple contexts. Contexts are named and can thus be located by the combination of their server's address and their name. Message: a message is an object exchanged between aglets. It allows for synchronous as well as asynchronous message passing between aglets. Message passing can be used by aglets to collaborate and exchange information in a loosely coupled fashion. Future reply: a future reply is used in asynchronous message-sending as a handler to receive a result later asynchronously. Identifier: an identifier is bound to each aglet. This identifier is globally unique and immutable throughout the lifetime of the aglet.
  • 68. Mobile Agent Toolkit from IBM: basic operations. Creation: the creation of an aglet takes place in a context. The new aglet is assigned an identifier, inserted into the context, and initialized. The aglet starts executing as soon as it has been successfully initialized. Cloning: the cloning of an aglet produces an almost identical copy of the original aglet in the same context. The only differences are the assigned identifier and the fact that execution restarts in the new aglet. Note that execution threads are not cloned. Dispatching: dispatching an aglet from one context to another will remove it from its current context and insert it into the destination context, where it will restart execution (execution threads do not migrate). We say that the aglet has been "pushed" to its new context. Retraction: the retraction of an aglet will pull (remove) it from its current context and insert it into the context from which the retraction was requested. Activation and deactivation: the deactivation of an aglet is the ability to temporarily halt its execution and store its state in secondary storage. Activation of an aglet will restore it in a context. Disposal: the disposal of an aglet will halt its current execution and remove it from its current context. Messaging: messaging between aglets involves sending, receiving, and handling messages synchronously as well as asynchronously.
  • 69. Outline
1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming
  • 70. Autonomy
• Target and Context: Autonomy is only meaningful in terms of specific targets and within given contexts.
• Capability: Autonomy only makes sense if an agent has a capability toward a target. E.g., a rock is not autonomous.
• Sources of Autonomy:
  Endogenous: self-liberty, desire, experience, motivations
  Exogenous: social, deontic liberty, environments
• Implementations: off-line and by design, online with fixed cost analysis, online learning
  • 71. Perspectives on Autonomy
Communication
Cognitive Science and AI
Organizational Science
Software Engineering
  • 72. Autonomy and Communication
Detection and expression of autonomies requires a shared understanding of social roles and personal relationships among the participating agents; e.g., agents with positive relationships will change their autonomies to accommodate one another.
The form of the directive holds clues for autonomy, e.g., specificity in "Do x with a wrench and slowly."
The content of the directive and the responses to it contribute to the autonomy, e.g., "Do x soon."
An agent's internal mechanism for autonomy determination affects the detection, expression, and harmony of autonomies, e.g., an agent's moods, drives, temperaments, …
  • 73. Situated Autonomy and Action Selection
[Diagram: sensory data, communications, and beliefs serve as enablers of situated autonomy, which feeds two parallel chains: physical goal → physical intention → physical act, and communication goal → communication intention → communication act.]
  • 74. Shared Autonomy between an Air Traffic Control assistant agent and the human operator (1999)
  • 75. Autonomy Computation
Collision:
Autonomy = (CollisionPriority / 4.0) + ((|CollisionPriority − 4.0| × t) / T)
Landing:
If 3.0 < LandingPriority <= 4.0:
Autonomy = 1.0
If LandingPriority < 3.0:
Autonomy = (LandingPriority / 4.0) + ((|LandingPriority − 4.0| × t) / 2)
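The two formulas above translate directly into code. In this sketch, t is read as elapsed time and T as a time horizon (the slide does not define them), and the case LandingPriority ≥ 3.0 outside (3.0, 4.0] is left unspecified, as on the slide:

```python
def collision_autonomy(priority, t, T):
    # Autonomy = (CollisionPriority / 4.0) + ((|CollisionPriority - 4.0| * t) / T)
    return priority / 4.0 + (abs(priority - 4.0) * t) / T


def landing_autonomy(priority, t):
    if 3.0 < priority <= 4.0:
        return 1.0
    if priority < 3.0:
        # Autonomy = (LandingPriority / 4.0) + ((|LandingPriority - 4.0| * t) / 2)
        return priority / 4.0 + (abs(priority - 4.0) * t) / 2
    return None  # priority == 3.0 (and > 4.0) is not covered by the slide
```

As t grows toward T, the time-dependent term raises autonomy for low-priority situations, so the agent takes over more decisions the longer a situation persists.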
  • 76. Team-Building Intuition
• Drivers on the road are generally not a team
• Race driving in a "draft" is a team
• 11 soccer players declaring to be a team are a team
• Herding sheep is generally a team — agents change their autonomy, roles, coordination strategies
• A string quartet is a team — well organized and practiced
  • 77. Team — Phil Cohen, et al.
Phil Cohen, et al.: shared goal and shared mental states. Communication in the form of speech acts is required for team formation.
Steps to become a team:
1. Weak Achievement Goal (WAG) relative to q and with respect to a team to bring about p, if either of these conditions holds:
• The agent has a normal achievement goal to bring about p; that is, the agent does not yet believe that p is true and has p eventually being true as a goal.
• The agent believes that p is true, will never be true, or is irrelevant (that is, q is false), but has as a goal that the status of p be mutually believed by all the team members.
2. Joint Persistent Goal (JPG) relative to q to achieve p just in case:
1. They mutually believe that p is currently false;
2. They mutually know they all want p to eventually be true;
3. It is true (and mutual knowledge) that until they come to mutually believe either that p is true, that p will never be true, or that q is false, they will continue to mutually hold p as a weak achievement goal.
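The WAG definition is a disjunction of two conditions, which can be written as a predicate. This is a toy encoding, not Cohen et al.'s formal logic: the belief/goal representation (dicts of sets, and tags like `"eventually"` and `"mutual_belief"`) is an assumption made for illustration.

```python
def weak_achievement_goal(agent, p, q):
    """Does this agent hold a WAG for p, relative to escape condition q?"""
    # Condition 1: normal achievement goal —
    # the agent does not yet believe p, and wants p to eventually hold.
    normal = p not in agent["beliefs"] and ("eventually", p) in agent["goals"]

    # Condition 2: the goal is moot (p holds, can never hold, or q is false),
    # but the agent still wants the status of p to be mutually believed.
    moot = (
        (p in agent["beliefs"]
         or ("never", p) in agent["beliefs"]
         or ("not", q) in agent["beliefs"])
        and ("mutual_belief", p) in agent["goals"]
    )
    return normal or moot
```

Condition 2 is what distinguishes a WAG from an ordinary goal: even after the goal itself is dropped, the agent remains committed to informing its teammates, which is exactly the glue the JPG definition builds on.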
  • 78. Team — Phil Cohen, et al.
• Requiring speech-act communication is too strong
• Requiring mutual knowledge is too strong
• Requiring agents to remain in a team until everyone knows about the team-qualifying condition is too strong
  • 79. Team — Michael Wooldridge
With respect to agent i's desires, there is potential for cooperation over a goal φ iff:
1. there is some group g such that i believes that g can jointly achieve φ; and either
2. i cannot achieve φ in isolation; or
3. i believes that for every action α that it can perform that achieves φ, it has a desire of not performing α.
i performs speech act FormTeam to form a team iff:
1. i informs team g that the team J-can φ; and
2. i requests team g to perform φ.
Team g is a PreTeam iff:
1. g mutually believes that it J-can φ;
2. g mutually intends φ.
  • 80. Team — Michael Wooldridge
• Onset of the cooperative attitude is independent of knowing about specific individuals
• Assuming the agent knows about g is too simplistic
• Requiring speech-act communication is too strong
• Requiring mutual knowledge is too strong
  • 81. Team — Munindar Singh
⟨agents, social commitments, coordination relationships⟩
Social commitment: ⟨debtor, creditor, context, discharge condition⟩
Operators: Create, Discharge, Cancel, Release, Delegate, Assign
Coordination relationships about events:
e is required by f
e disables f
e feeds or enables f
e conditionally feeds f
…
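Singh's commitment tuple and its six operators can be sketched as a small data structure. This is an illustrative model, not Singh's formalization: the class and method names are hypothetical, and the distinct norms governing Cancel vs. Release are reduced to comments.

```python
from dataclasses import dataclass


@dataclass
class Commitment:
    debtor: str               # who owes the commitment
    creditor: str             # to whom it is owed
    context: str              # the social context (e.g., an organization)
    discharge_condition: str  # what must hold for the commitment to be met


class CommitmentStore:
    def __init__(self):
        self.active = []

    def create(self, c):
        self.active.append(c)

    def discharge(self, c):
        # Debtor brought about the discharge condition.
        self.active.remove(c)

    def cancel(self, c):
        # Debtor withdraws unilaterally (typically a norm violation).
        self.active.remove(c)

    def release(self, c):
        # Creditor frees the debtor; no violation is implied.
        self.active.remove(c)

    def delegate(self, c, new_debtor):
        # Debtor side changes; the commitment itself survives.
        c.debtor = new_debtor

    def assign(self, c, new_creditor):
        # Creditor side changes; the commitment itself survives.
        c.creditor = new_creditor
```

The key design point mirrored here is that Delegate and Assign transform a live commitment, whereas Discharge, Cancel, and Release all end it but differ in who acts and whether norms are violated.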
  • 82. Agent as a member of a group
[Diagram: an agent honors obligations, handles roles, partakes in plans, and specifies goals; as a member of an institution it inherits norms; it shares with and relies on a group, which partakes in an organization that contains set values and terminal goals.]
  • 83. The big picture
[Diagram: relationships among Norms, Values, Obligation_ab (i.e., responsibility), consent, Autonomy_b, Dependence_ba, Autonomy_b + Autonomy_a, Delegation_ba, Control_ab, Trust_ba, and Power_ab, connected by coordination links ranging from perfect agreement through weak agreement to deficiency.]
  • 84. Concluding Remarks
There are many uses for:
• Agents
• Agent-based systems
• Agent frameworks
Many open problems are available:
• Theoretical issues for modeling social elements such as autonomy, power, trust, dependency, norms, preference, responsibilities, security, …
• Adaptation and learning issues
• Communication and conversation issues