Multiagent learning and argumentation (Br 2008)

Multiagent learning and argumentation. Porto Alegre (BR)


1. Argumentation &amp; Learning in Multiagent Social Choice
   - Enric Plaza (IIIA-CSIC)
2. Outline
   - CBR vs. Learning vs. Agents
   - Introduction to Social Choice in MAS
   - Justified Predictions in MAS
   - Arguments and Counterexamples
   - Argumentation Process
   - Experimental Evaluation
   - Conclusions &amp; Future Work
3. Learning, RL, and Agents
   Why "always" Reinforcement Learning in MAS? It's online learning!
4. CBR is Online Learning
   Argumentation
5. Committee
   - Input
   - Deliberation
   - Aggregation
   - Output
   Committee: a group of people officially delegated (elected or appointed) to perform a function, such as investigating, considering, reporting, or acting on a matter.
   Team: a number of persons associated in some joint action; a group organized to work together.
   Coalition: a combination or alliance, especially a temporary one, between persons, factions, states, etc.; an alliance for combined action, especially a temporary alliance of political parties.
6. Ensemble Effect
7. Committee of agents
8. Deliberation + Voting
9. Introduction
   - CBR agents
     - solve problems and learn from them
   - How could CBR agents collaborate?
     - solving and/or learning from cases
   - Argumentation process
     - to mediate agent collaboration
     - achieving a joint prediction
10. Learning vs. Cooperating
   - Why learn in problem solving?
     - to improve accuracy, range, etc.
   - Why cooperate in problem solving?
     - to improve accuracy, range, etc.
   - Learning-cooperation continuum
     - learning agents that cooperate by arguing
11. CBR cycle
12. Argumentation in Multi-Agent Learning
   - An argumentation framework for learning agents
     - justified predictions as arguments
   - Individual policies for agents to
     - generate arguments
     - generate counterarguments
     - select counterexamples
13. Justified Prediction
   Justification: a symbolic description with the information relevant to determine a specific prediction.
14. Justified Prediction
   - A justified prediction is a tuple J = ⟨A, P, S, D⟩,
   - where agent J.A considers J.S to be the correct solution for problem J.P,
   - and this is justified by a symbolic description J.D that subsumes J.P.
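The tuple above can be sketched as a small data structure. This is an illustrative sketch: the field names and the dict-of-features representation of problems and descriptions are assumptions, not the deck's notation.

```python
from dataclasses import dataclass

@dataclass
class JustifiedPrediction:
    """A justified prediction J = <A, P, S, D> (field names are illustrative)."""
    agent: str         # J.A: the agent making the prediction
    problem: dict      # J.P: the problem, as feature -> value pairs
    solution: str      # J.S: the predicted solution class
    description: dict  # J.D: symbolic justification; must subsume J.P

def subsumes(description, features):
    """A description subsumes a case (or problem) when every
    feature-value pair it mentions also holds in the case."""
    return all(features.get(f) == v for f, v in description.items())
```

In the sponge example later in the deck, a description such as `{"megascleres": "tylostyle", "gemmules": "no"}` would subsume any problem whose features include those two pairs.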
15. Examination (1)
   - Agent A1 sends a problem to agent A2.
   - Agent A2 sends a justification with a symbolic description D of why P1 has solution S2, according to its experience.
16. Justification example
   The predicted solution is hadromerida because the smooth form of the megascleres of the spiculate skeleton of the sponge is of type tylostyle, the spiculate skeleton of the sponge does not have uniform length, and there are no gemmules in the external features of the sponge.
17. Examination (2)
   Set of cases subsumed by justification J.D
18. Examination (2)
   Set of cases subsumed by justification J.D; the counterexamples among them form the Refutation set.
19. Counterargument generation
20. Argument types
   - Justified Prediction: an argument α endorsing an individual prediction
   - Counterargument: an argument β offered in opposition to an argument α
   - Counterexample: a case c contradicting an argument α
21. Case-based Confidence
22. Preference on Arguments
   Confidence in an argument, based on cases
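One way to make "confidence in an argument, based on cases" concrete is a smoothed ratio of endorsing to refuting cases among those the justification subsumes. This is a sketch only: the exact estimator, and in particular the +1 smoothing term, are assumptions, not taken from the slides.

```python
def case_based_confidence(case_base, description, predicted_solution):
    """Estimate confidence in an argument from a case base.

    Subsumed cases sharing the predicted solution endorse the argument;
    subsumed cases with a different solution refute it. The +1 smoothing
    term (so an argument backed by few cases gets low confidence) is an
    assumption. case_base: list of (features_dict, solution) pairs.
    """
    endorsing = refuting = 0
    for features, solution in case_base:
        if all(features.get(f) == v for f, v in description.items()):
            if solution == predicted_solution:
                endorsing += 1
            else:
                refuting += 1
    return endorsing / (endorsing + refuting + 1)
```

A joint (two-agent) variant would pool both agents' endorsing and refuting counts before taking the ratio.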
23. Preference on Arguments (2)
   Preferred: higher Joint Confidence
24. Relations between arguments
   - Consistent
   - Incomparable
   - Counterargument
25. Relations between cases and justified predictions
   - Endorsing case
   - Irrelevant case
   - Counterexample
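The three relations above can be checked mechanically. A minimal sketch, assuming cases are (feature-dict, solution) pairs as in the earlier sketches:

```python
def relation(case_features, case_solution, description, predicted_solution):
    """Classify a case with respect to a justified prediction:
    - 'irrelevant' if the justification does not subsume the case;
    - 'endorsing' if it is subsumed and shares the predicted solution;
    - 'counterexample' if it is subsumed but has a different solution."""
    subsumed = all(case_features.get(f) == v for f, v in description.items())
    if not subsumed:
        return "irrelevant"
    return "endorsing" if case_solution == predicted_solution else "counterexample"
```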
26. Argument Generation
   - Generation of a justified prediction
     - LID generates a description α.D subsuming P
   - Generation of a counterargument
   - If no counterargument can be generated: selection of a counterexample
27. Counterargument Generation
   - Counterarguments are generated based on the specificity criterion
   - LID generates a description β.D subsuming P and subsumed by α.D
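The structural side of the specificity criterion can be stated as a pair of subsumption checks: β.D must still subsume the problem P while being strictly more specific than α.D (i.e., subsumed by it). A sketch under the same dict-based representation assumed earlier; generating β.D itself is LID's job and is not shown here.

```python
def is_specific_counterargument(beta_desc, alpha_desc, problem):
    """Check the specificity criterion structurally:
    beta.D subsumes the problem P, and alpha.D subsumes beta.D
    (so beta.D is strictly more specific than alpha.D)."""
    def subsumes(desc, features):
        return all(features.get(f) == v for f, v in desc.items())
    return (subsumes(beta_desc, problem)
            and subsumes(alpha_desc, beta_desc)
            and beta_desc != alpha_desc)
```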
28. Selection of a Counterexample
   - Select a case c subsumed by α.D and endorsing a different solution class.
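The selection rule on this slide reads directly as a search over the agent's case base. A minimal sketch with the same assumed (feature-dict, solution) representation; returning the first match is an arbitrary choice, as the slide does not specify a tie-breaking policy.

```python
def select_counterexample(case_base, description, predicted_solution):
    """Return the first case subsumed by alpha.D whose solution class
    differs from the predicted one, or None if no such case exists."""
    for features, solution in case_base:
        subsumed = all(features.get(f) == v for f, v in description.items())
        if subsumed and solution != predicted_solution:
            return features, solution
    return None
```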
29. AMAL protocol
   - Justified prediction asserted in the next round
   - Assertions of n agents at round t
   - Agent states a counterargument β
   - Set of contradicting arguments for agent A_i at round t (those predicting a different solution)
30. Argument Generation
31. Confidence-weighted Voting
   - Each argument in H_t is a vote for an alternative
   - Each vote is weighted by the joint confidence measure of that argument
   - Weighted voting: the alternative with the highest aggregated confidence wins
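The weighted vote can be sketched in a few lines, assuming each argument in H_t is reduced to its predicted solution and its joint confidence:

```python
from collections import defaultdict

def confidence_weighted_vote(arguments):
    """arguments: iterable of (predicted_solution, joint_confidence) pairs.
    Each argument votes for its solution, weighted by its joint confidence;
    the solution with the highest aggregate weight wins."""
    totals = defaultdict(float)
    for solution, confidence in arguments:
        totals[solution] += confidence
    return max(totals, key=totals.get)
```

For instance, two moderately confident votes for one class can outweigh a single highly confident vote for another.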
32. Protocol Performatives
   - assert(α): an agent announces α as its current prediction.
   - rebut(α, β): a counterargument β is sent against the argument α.
   - withdraw(α): the prediction α is withdrawn.
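A minimal message constructor covering the three performatives; only the performative names come from the slide, while the dict layout and field names are assumptions for illustration.

```python
# The three AMAL performatives listed on the slide.
PERFORMATIVES = {"assert", "rebut", "withdraw"}

def make_message(performative, argument, counterargument=None):
    """Build a protocol message. rebut(alpha, beta) is the only
    performative that carries a second, opposing argument."""
    if performative not in PERFORMATIVES:
        raise ValueError(f"unknown performative: {performative}")
    if performative == "rebut" and counterargument is None:
        raise ValueError("rebut requires a counterargument")
    return {"performative": performative,
            "argument": argument,
            "counterargument": counterargument}
```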
33. Experiments
   - Designed to validate two hypotheses
     - average of 5 runs of 10-fold cross-validation
   - (H1) Argumentation is a useful framework for joint deliberation and can improve over other typical methods such as voting.
   - (H2) Learning from communication improves the individual performance of a learning agent participating in an argumentation process.
34. Accuracy after Deliberation (old) [chart: accuracy vs. #agents, Sponges data set]
35. Accuracy after Deliberation [chart: accuracy vs. #agents, Sponges data set]
36. Accuracy after Deliberation (old) [chart: accuracy vs. #agents, Soybean data set]
37. Accuracy after Deliberation [chart: accuracy vs. #agents, Soybean data set]
38. Individual Learning from Communication [chart: accuracy vs. %cases, Sponges data set]
39. Individual Learning from Communication [chart: accuracy vs. %cases, Soybean data set]
40. Case Base Size [chart: #cases, with labels 23.4%, 20%, 31.58%, 20.02%]
41. Case Base Size (cont.) [chart: #cases, with labels 23.4%, 20%, 31.58%, 20.02%]
42. Learning from a few good cases while arguing [chart: accuracy vs. %cases, Soybean data set]
43. Conclusions
   - An argumentation framework for learning agents
   - A case-based preference relation over arguments,
     - by computing a confidence estimation of arguments
   - A case-based policy to generate counterarguments and select counterexamples
   - An argumentation-based approach for learning from communication
44. Conclusions
   - Two different preference relations on arguments (individual preference vs. joint preference)
     - Confidence is jointly evaluated by both agents
       - used for incomparable arguments (costly)
       - case-based estimation
     - Specificity can be evaluated individually
       - used to generate counterarguments (cheaper)
45. Future Work
   - Framework for agents using induction
     - better policies for generating counterarguments
     - collaborative search in generalization space
   - Deliberative agreement
     - for social choice and collective judgment models
     - aggregation procedures over sets of interconnected judgments
     - deliberation over sets of interconnected judgments
46. AMAL vs. Centralized [chart: accuracy vs. #agents, Sponges data set]