MSc Presentation


  1. 1. Personality in Argumentative Agents Marlon Etheredge 7 November 2016 Utrecht University
  2. 2. Table of Contents Introduction Argumentation Personality Personality Model Reasoning Software Implementation Opponent Modelling Conclusion 1
  3. 3. Introduction
  4. 4. Research Topic • Software agents in argumentation dialogues • Agents try to settle their differences, which has many applications including (explaining) decision making [Zho+14], training humans [Van11] and resolving conflicts in firewalls [App+12] • Argumentation can take the form of deliberation, negotiation and persuasion, among others • The growing number of applications requires that the agent be adjustable to the application context • Introducing personality in argumentative agents 2
  5. 5. Argumentation Dialogues • This research focuses on persuasion dialogues • Agents need to settle a conflicting point of view, called the topic • As an example, two agents, Peter and Otto, can settle their differences regarding a topic: Science endangers humanity • Peter, the proponent of the dialogue, starts by claiming the topic of the dialogue 3
  9. 9. Persuasion Peter: Science endangers humanity. (Making a claim) Otto: Why do you think that science endangers humanity? (Asking for support for the claim) Peter: Since science brings about many new technologies that could potentially harm human beings. (Providing support for the claim) Otto: I agree with you that science brings about new technologies. (Conceding the provided support for the claim) But I disagree that this poses a threat to humanity, since science primarily introduces new technologies that improve the lives of human beings. (Providing a counter argument) 4
  12. 12. Peter: Why do you think that these technologies improve the lives of human beings? (Asking for support for the counter argument) Otto: Since these technologies provide a method of helping humans in situations where they would otherwise have been helpless. In addition, improving the lives of human beings does not endanger humanity. (Providing support for the counter argument) Peter: OK, I agree that science introduces new technologies that improve the lives of human beings. Moreover, I agree that improving the lives of human beings does not endanger humanity. (Conceding a claim) 5
  13. 13. Formalization • Using Prakken’s dialogue framework [Pra05] • Using ASPIC+ [Pra10] as a logic for defeasible argumentation • Different attitudes of argumentative agents are built on top of Parsons et al. [PWA03] 6
  14. 14. Research Direction • Previously, the reasoning of argumentative agents was primarily based on game-theoretic approaches • Personality in the reasoning process of argumentative agents has rarely been studied • This research also studies how these argumentative agents can model the personalities of their opponents 7
  15. 15. Research Topic • Personality in agents increases the adjustability of the agent’s reasoning process to the context of the application • By modelling the personality of the opponent, the agent can include knowledge of the opponent’s personality, and thus its behavior, in its reasoning process • Personality in agents is expected to contribute to the compatibility of human and artificial intelligence in shared tasks 8
  16. 16. Research Topic • Based on the configuration of its personality, the agent can behave differently • In addition, the agent can prefer or disprefer certain utterances, or only use them under certain conditions Quick decisions The agent could be configured to easily accept claims by its opponent, reaching decisions more quickly Absolute truth The agent could be configured to only accept arguments when the opponent uses irrefutable argumentation, in scenarios where the correctness of the outcome of a dialogue is critical 9
  20. 20. Research questions 1. How can personality be introduced to argumentative agents for persuasion dialogues? 2. How can a model for personality in argumentative agents be devised that allows argumentative agents to reason according to a personality configuration? 3. How can an argumentative agent featuring personality be implemented? 4. How can an argumentative agent featuring personality model the personality of its opponent? 10
  21. 21. Approach • Theoretical model of personality in argumentative agents • Implementation of an argumentative agent featuring personality based on Erik Kok’s BAIDD framework [Kok13] • Method for modelling the personality of the opponent 11
  22. 22. Argumentation
  23. 23. Dung’s Abstract Framework (Figure: the example arguments “Science endangers humanity”, “Science does not endanger humanity”, “New technologies harm humans” and “New technologies improve the lives of humans; improving lives does not harm humans”, connected by attack arrows) • Dung describes a formalism of argumentation [Dun95] allowing for the description of arguments and attack: • A set A of arguments • A binary relation Def on arguments, called attack • Allowing for different notions of acceptability describing the status of arguments • Used to draw conclusions based on AF = (A, Def) 12
  24. 24. Dung’s Abstract Framework (Figure: the same framework shown abstractly, with arguments A, B, C and D connected by attack arrows) 12
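As a sketch of how conclusions can be drawn from AF = (A, Def): the grounded extension can be computed by iterating Dung's characteristic function until a fixpoint. The function below is the standard construction; the concrete attack relation is a hypothetical reading of the four-argument example on this slide.

```python
def grounded_extension(args, attacks):
    """Iterate Dung's characteristic function F from the empty set:
    an argument is accepted when every one of its attackers is itself
    attacked by the current accepted set (i.e. the argument is defended)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    in_set = set()
    while True:
        new = {a for a in args
               if all(any((d, b) in attacks for d in in_set)
                      for b in attackers[a])}
        if new == in_set:        # fixpoint reached: grounded extension
            return in_set
        in_set = new

# Hypothetical attack relation for the slide's example: A and B rebut
# each other, and D attacks C (the labels are illustrative only).
AF_args = {"A", "B", "C", "D"}
AF_attacks = {("A", "B"), ("B", "A"), ("D", "C")}
print(grounded_extension(AF_args, AF_attacks))  # {'D'} (D is unattacked)
```

Since A and B attack each other and neither is defended by an unattacked argument, only D ends up in the grounded extension here.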
  25. 25. ASPIC+ • Argumentation system, an instantiation of Dung’s framework • Supports both strict (deductive) and defeasible reasoning • A contrariness function representing contrary relationships like ’harms humans’ and ’saves lives’ • Specification of preference orderings over defeasible inference rules, arguments and a knowledge base 13
  26. 26. ASPIC+ • Arguments are constructed from a knowledge base K in an argumentation system (K, ≤) • Strict rules of the form ϕ1, . . . , ϕn → ϕ • Defeasible rules of the form ϕ1, . . . , ϕn ⇒ ϕ • E.g. A = ϕ: A is an argument using no rules of inference; • for rs = ϕ → ψ, B is an argument applying the strict inference rule rs to ϕ, from which it follows without exception that ψ; • for rd = ϕ ⇒ γ, C is an argument applying the defeasible inference rule rd to ϕ, from which it presumably follows that γ 14
  27. 27. ASPIC+ • Three types of attack: • An argument A undercuts an argument B if A attacks an inference rule of B • An argument A rebuts an argument B if A attacks the conclusion of B • An argument A undermines an argument B if A attacks a premise of B • Status of arguments: • An argument is justified if it can make the opponent run out of replies • An argument is overruled if it is not justified and defeated by a justified argument • An argument is defensible if it is not justified but none of its defeaters is justified 15
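The three attack forms can be sketched as a small classifier over a simplified argument record. The `Argument` fields, the `contrary` mapping, and the `¬rule-name` convention for undercutting are illustrative assumptions, not the exact ASPIC+ definitions.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    premises: set                  # premises the argument rests on
    rules: set                     # names of defeasible rules it applies
    conclusion: str

def attack_type(a, b, contrary):
    """Classify how argument a attacks argument b, per the three forms:
    rebut (conclusion), undermine (premise), undercut (inference rule)."""
    if a.conclusion in contrary.get(b.conclusion, set()):
        return "rebut"             # a attacks b's conclusion
    if any(a.conclusion in contrary.get(p, set()) for p in b.premises):
        return "undermine"         # a attacks a premise of b
    if a.conclusion in {f"¬{r}" for r in b.rules}:
        return "undercut"          # a attacks an inference rule of b
    return None

b = Argument({"newTechnologies"}, {"r1"}, "harmHumans")
a = Argument({"helpingHumans"}, set(), "¬harmHumans")
contrary = {"harmHumans": {"¬harmHumans"}, "¬harmHumans": {"harmHumans"}}
print(attack_type(a, b, contrary))  # rebut
```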
  28. 28. Prakken’s Abstract Framework • Has a dialogue goal and at least two participants • Communication language Lc, defining the available speech acts • Protocol Pr, governing the allowed speech acts throughout the dialogue • Commitments, propositions the participant is expected to defend publicly • Effect rules C of speech acts in Lc, describing the effects of speech acts on the commitments of the participants • Outcome, turntaking- and termination rules 16
  29. 29. Liberal Dialogue Systems • Framework specialized for persuasion dialogues • Specifies a set of protocol rules for persuasion • Defines a set of speech acts and corresponding effects rules for persuasion • Specifies corresponding turntaking-, outcome- and termination rules 17
  30. 30. Speech Acts for Persuasion
      Act         Attacks                                       Surrenders
      claim ϕ     why ϕ                                         concede ϕ
      why ϕ       argue A                                       retract ϕ
      argue A     why ϕ (ϕ ∈ prem(A)); argue B (B defeats A)    concede ϕ (ϕ ∈ prem(A) or ϕ = conc(A))
      concede ϕ   —                                             —
      retract ϕ   —                                             —
      18
  31. 31. Effect Rules for Persuasion • A participant that claim(ϕ) or concede(ϕ) commits himself to ϕ • A participant that retract(ϕ) uncommits himself to ϕ • A participant that argue A commits himself to the premises of A and the conclusion of A 19
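The effect rules above can be sketched as an update function on commitment stores. This is a simplified sketch: propositions are plain strings and the content of argue is a (premises, conclusion) pair, both illustrative choices rather than the framework's formal definitions.

```python
def apply_effect(commitments, speaker, act, content):
    """Update the speaker's commitment store according to the effect rules:
    claim/concede commit to the proposition, retract uncommits, and
    argue commits to all premises and the conclusion of the argument."""
    cs = commitments.setdefault(speaker, set())
    if act in ("claim", "concede"):
        cs.add(content)
    elif act == "retract":
        cs.discard(content)
    elif act == "argue":
        premises, conclusion = content
        cs.update(premises)
        cs.add(conclusion)
    return commitments

commitments = {}
apply_effect(commitments, "P", "claim", "endangersHumanity")
apply_effect(commitments, "P", "argue", ({"harmHumans"}, "endangersHumanity"))
apply_effect(commitments, "O", "concede", "newTechnologies")
apply_effect(commitments, "P", "retract", "endangersHumanity")
print(commitments["P"])  # {'harmHumans'}
```

After the retraction, the proponent remains committed only to the premise it argued with, mirroring how retract ϕ removes just ϕ from the store.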
  32. 32. Example Persuasion Dialogue
      P1: claim endangersHumanity
      O2: why endangersHumanity
      P3: endangersHumanity since harmHumans
      O4: why harmHumans
      P5: harmHumans since newTechnologies
      O6: concede newTechnologies
      O7: claim ¬endangersHumanity
      P8: why ¬endangersHumanity
      O9: ¬endangersHumanity since helpingHumans
      P10: why helpingHumans
      O11: helpingHumans since newTechnologies
      P12: concede helpingHumans
      O13: ¬harmHumans since helpingHumans
      O14: retract endangersHumanity 20
  33. 33. Personality
  34. 34. Personality Theories • Different stances on how ’personality’ should be investigated, including biological, evolutionary and behaviorist stances • Trait theory subdivides the concept into measurable patterns of human behavior • There is no consensus on the number of traits • Different taxonomies exist, varying in the number of personality traits and their descriptions • Five-factor model [JS99; MC08; Wig96], Eysenck Personality Inventory [EE65], Myers-Briggs Type Indicator [MM10], HEXACO [Ash+04] 21
  35. 35. FFM (OCEAN Model) • Our agent’s personality model is based on the Five-factor model (FFM) • Describes the concept of personality in terms of five personality traits (often referred to by the acronym OCEAN): • Openness to Experience: Active seeking and appreciation of experiences for their own sake • Conscientiousness: Degree of organization, persistence, control and motivation • Extraversion: Quantity and intensity of energy directed outwards into the social world • Agreeableness: Kinds of interactions an individual prefers from compassion to tough mindedness • Neuroticism: Psychological distress 22
  36. 36. FFM (OCEAN Model) Each personality trait is subdivided into six personality facets:
      O: Fantasy, Aesthetics, Feelings, Actions, Ideas, Values
      C: Competence, Order, Dutifulness, Achievement Striving, Self-Discipline, Deliberation
      E: Warmth, Gregariousness, Assertiveness, Activity, Excitement Seeking, Positive Emotions
      A: Trust, Straightforwardness, Altruism, Compliance, Modesty, Tender-Mindedness
      N: Anxiety, Angry Hostility, Depression, Self-Consciousness, Impulsiveness, Vulnerability
      23
  37. 37. Personality Model
  38. 38. Personality Model • Model describing the personality of the argumentative agent • Divided into three components: Personality Vector Assigns to each personality facet from the FFM a strength that indicates how strongly the facet is present in the agent’s personality Attitude Acts as a condition that must be met before the agent is allowed to act in an argumentative dialogue Reasoning System Defines, based on the personality of the agent, the agent’s preferences over speech acts and attitudes 24
  39. 39. Action Selection vs Revision The personality model makes a distinction between two types of personality facets and corresponding reasoning: Action Selection • Preference ordering over speech act types • E.g. an agent prefers claiming over conceding Action Revision • Preference ordering over attitudes • E.g. an agent prefers to be faithful over rigid when conceding 25
  40. 40. Action Selection vs Revision Action Selection • Self-consciousness • Assertiveness • Actions • Ideas • Values • Competence Action Revision • Achievement Striving • Self-discipline • Deliberation • Activity • Trust • Straightforwardness • Modesty • Anxiety • Angry Hostility • Depression 26
  41. 41. Personality Theory Some of the personality facets in the FFM are not beneficial to the personality model of the argumentative agent:
      O: Fantasy, Aesthetics, Feelings, Actions, Ideas, Values
      C: Competence, Order, Dutifulness, Achievement Striving, Self-Discipline, Deliberation
      E: Warmth, Gregariousness, Assertiveness, Activity, Excitement Seeking, Positive Emotions
      A: Trust, Straightforwardness, Altruism, Compliance, Modesty, Tender-Mindedness
      N: Anxiety, Angry Hostility, Depression, Self-Consciousness, Impulsiveness, Vulnerability
      27
  42. 42. Personality Theory • Each personality trait and corresponding personality facet has a description in the personality theory • IPIP [Gol99] contains a comprehensive description of these based on the NEO PI-R personality inventory [CM92] • Facets included in our personality model are interpretations of the descriptions in IPIP tailored to argumentative agents 28
  43. 43. Personality Facets The facets retained in the agent’s personality model:
      O: Actions, Ideas, Values
      C: Competence, Achievement Striving, Self-Discipline, Deliberation
      E: Assertiveness, Activity
      A: Trust, Straightforwardness, Modesty
      N: Anxiety, Angry Hostility, Depression, Self-Consciousness
      Self-consciousness: "Tendency to be shy or anxious", not preferring claim and argue, preferring to concede 29
  44. 44. Personality Facets Assertiveness: "Social ascendancy and forcefulness of expression", preferring claim 29
  45. 45. Personality Facets Actions: "Openness to new experiences on a practical level", preferring concede 29
  46. 46. Personality Facets Ideas: "Intellectual curiosity", preferring challenge 29
  47. 47. Personality Facets Values: "Readiness to re-examine own values and those of authority figures", preferring retract 29
  48. 48. Personality Facets Competence: "Belief in own self-efficacy", disfavoring retract and accept, preferring argue 29
  49. 49. Personality Facets Achievement Striving: "The need for personal achievement and sense of direction", preferring to achieve and defend its goal 30
  50. 50. Personality Facets Self-discipline: "One’s capacity to begin tasks and follow through to completion despite boredom or distractions", preferring to add to lines of dispute 30
  51. 51. Personality Facets Deliberation: "Tendency to think things through before acting or speaking", preferring well-motivated moves 30
  52. 52. Personality Facets Activity: "Pace of living", preferring to make moves in a dialogue at a fast pace 30
  53. 53. Personality Facets Trust: "Preference in believing others", preferring attitudes that help the agent concede 30
  54. 54. Personality Facets Straightforwardness: "The tendency of a person to be direct and frank in communication with others", preferring not to be incoherent, irrelevant or verbose 30
  55. 55. Personality Vector • Let AS denote the set of action selection personality facets and AR the set of action revision personality facets • Let Strength : AS ∪ AR → ℝ denote the strength of a facet • An action selection personality vector PVAS is a vector [Strength(f1), Strength(f2), . . . , Strength(fn)] such that n = |AS| and f1, f2, . . . , fn ∈ AS • An action revision personality vector PVAR is a vector [Strength(f1), Strength(f2), . . . , Strength(fn)] such that n = |AR| and f1, f2, . . . , fn ∈ AR 31
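Under the definitions above, building PVAS and PVAR is a direct mapping from facets to strengths. The facet names follow the action selection / action revision split on the earlier slide; the strength values here are illustrative.

```python
# Facet sets from the slides (AS = action selection, AR = action revision).
AS = ["self-consciousness", "assertiveness", "actions",
      "ideas", "values", "competence"]
AR = ["achievement striving", "self-discipline", "deliberation", "activity",
      "trust", "straightforwardness", "modesty",
      "anxiety", "angry hostility", "depression"]

def personality_vector(facets, strengths):
    """Build PV = [Strength(f1), ..., Strength(fn)] in facet order;
    facets without an explicit strength default to 0.0 (an assumption)."""
    return [strengths.get(f, 0.0) for f in facets]

# Illustrative configuration: an assertive, intellectually curious agent.
strengths = {"assertiveness": 0.9, "ideas": 0.7, "trust": 0.2}
pv_as = personality_vector(AS, strengths)
pv_ar = personality_vector(AR, strengths)
print(pv_as)  # [0.0, 0.9, 0.0, 0.7, 0.0, 0.0]
```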
  56. 56. Attitudes • Attitudes specify under what condition an agent is allowed to make a move containing a certain speech act • Attitudes are associated with the speech act types present in the framework • If the condition is not met, the agent cannot make that move • In addition to a preference over speech act types, the agent’s preferences cover attitudes through action revision • Attitudes were first introduced by Parsons et al. [PWA03] and are extended in this research 32
  57. 57. Attitudes Assertion Attitudes • Confident • Careful • Thoughtful • Spurious • Deceptive • Hesitant 33
  58. 58. Attitudes Acceptance Attitudes • Credulous • Cautious • Skeptical • Faithful • Rigid 34
  59. 59. Attitudes Challenge Attitudes • Judicial • Suspicious • Persistent • Tentative • Indifferent 35
  60. 60. Attitudes Retraction Attitudes • Regretful • Sensible • Retentive • Incongruous • Determined 36
  61. 61. Attitudes Argue Attitudes • Hopeful • Dubious • Thorough • Misleading • Fallacious • Devious 37
  62. 62. Summary • Agent personality model as personality vector and attitudes • Based on the FFM personality theory • Distinction between action selection and action revision • Model is extensible with more facets and attitudes, additional or different descriptions and even different personality theories 38
  63. 63. Reasoning
  64. 64. Reasoning • The agent’s reasoning is handled by its reasoning system, which consists of two components: Reasoning Rules Determine, according to the strengths of personality facets, an output value used to compute preference orderings Reasoning Algorithm Uses these output values to generate the moves played by the agent in a dialogue • The reasoning system uses a Mamdani Fuzzy Inference System (FIS) [MA75] to compute preference values based on reasoning rules and facet strengths 39
  65. 65. Mamdani Fuzzy Inference System • Mamdani FIS computes output values based on a set of input values, a membership distribution and a set of fuzzy rules • Reasoning rules are implemented as fuzzy rules • Using three fuzzy classes for input values: Low, Med and High and two for output values: Favored and Disfavored • Support for operators like and, or and not respectively implemented as min(x, y), max(x, y) and 1 − x 40
  66. 66. Reasoning Rules if x is med and y is high then z is disfavored if a is low and b is med then z is favored 41
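A minimal sketch of how such rules can be evaluated: triangular membership functions for Low/Med/High on [0, 1], 'and' as min (as on the slide), and a simplified weighted-average defuzzification with singleton outputs (favored = 1, disfavored = 0) in place of a full Mamdani centroid. The actual implementation uses the Fuzzylite library; the rule set and inputs here are illustrative.

```python
def low(x):  return max(0.0, 1.0 - 2.0 * x)             # 1 at 0, 0 from 0.5
def med(x):  return max(0.0, 1.0 - abs(2.0 * x - 1.0))  # peaks at 0.5
def high(x): return max(0.0, 2.0 * x - 1.0)             # 0 until 0.5, 1 at 1
# 'not' would be implemented as 1 - mf(x), per the slide.

def fis(rules, inputs):
    """Evaluate fuzzy rules: each rule is (antecedent, consequent) where the
    antecedent is a list of (variable, membership fn) pairs combined with
    'and' = min. Defuzzify as a weighted average of singleton outputs."""
    num = den = 0.0
    for antecedent, consequent in rules:
        w = min(mf(inputs[var]) for var, mf in antecedent)  # firing strength
        num += w * (1.0 if consequent == "favored" else 0.0)
        den += w
    return num / den if den else 0.5    # neutral when no rule fires

rules = [
    ([("ideas", high)], "favored"),           # if ideas is high then favored
    ([("deliberation", low)], "disfavored"),  # if deliberation is low then disfavored
]
pref = fis(rules, {"ideas": 0.8, "deliberation": 0.3})
print(round(pref, 3))  # 0.6
```

The returned value can then be read as the preference value for one speech act type or attitude, to be ranked against the others.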
  67. 67. Reasoning Rules • Reasoning rules for action selection and action revision now can be formalized, for example: • if ideas is high then challenge is favored • if deliberation is not high then thoughtful is disfavored • Reasoning rules introduce a syntax for specifying the effects of the strengths of facets in the agent’s personality on its behavior • Reasoning rules for action selection use strengths specified in PVAS and output preference values for speech act types • Reasoning rules for action revision use strengths specified in PVAR and output preference values for attitudes 42
  68. 68. Reasoning Rules (Example) Now, (complex) reasoning rules can be created that power the reasoning system of the agent 43
  69. 69. Reasoning Rules (Example) if actions is high or selfconsciousness is high then acceptance is favored 43
  70. 70. Reasoning Rules (Example) if ideas is high then challenge is favored 43
  71. 71. Reasoning Rules (Example) if assertiveness is high then assertion is favored 43
  72. 72. Reasoning Rules (Example) if achievementstriving is high and selfdiscipline is high and straightforwardness is high and modesty is low and anxiety is low and activity is high and deliberation is high then thoughtful is favored 43
  73. 73. Reasoning Rules (Example) By combining all these reasoning rules, the agent’s behavior is adjustable according to the definition of its personality vectors. The implementation contains seven action selection reasoning rules and 54 action revision reasoning rules. 43
  74. 74. Reasoning Algorithm • Consists of an action selection algorithm: 1. Computes, given the agent’s personality, a preference ordering over speech act types 2. Returns the preference ordering 44
  75. 75. Reasoning Algorithm • And an action revision algorithm: 1. Takes a preference ordering over speech act types 2. Computes for each speech act type the preference ordering over attitudes associated with that speech act type, given the agent’s personality 3. Tests whether the speech act type is allowed in a new move 4. If so, the new move is added to a set of moves that is contributed to the dialogue 5. Returns the set of moves 44
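The two algorithms can be sketched as follows. The preference values stand in for the FIS outputs, and the `allowed` stub stands in for the attitude conditions; all names and values here are illustrative.

```python
def action_selection(act_pref):
    """Step 1-2: order speech act types by their computed preference value."""
    return sorted(act_pref, key=act_pref.get, reverse=True)

def action_revision(act_order, attitude_pref, allowed):
    """For each act type (in preference order), rank its attitudes and keep
    the act with its most-preferred attitude whose condition admits it."""
    moves = []
    for act in act_order:
        attitudes = sorted(attitude_pref.get(act, {}),
                           key=attitude_pref[act].get, reverse=True)
        for attitude in attitudes:
            if allowed(act, attitude):       # attitude condition check
                moves.append((act, attitude))
                break                        # keep best admissible attitude
    return moves

act_pref = {"claim": 0.8, "concede": 0.3, "why": 0.6}
att_pref = {"claim": {"confident": 0.9, "careful": 0.4},
            "why": {"judicial": 0.7},
            "concede": {"credulous": 0.5}}
allowed = lambda act, att: act != "concede"  # stub: conceding not admissible
print(action_revision(action_selection(act_pref), att_pref, allowed))
```

With these inputs the agent ends up with claim (confident) and why (judicial) as candidate moves, while concede is filtered out by the condition.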
  76. 76. Summary • Reasoning rules for the implementation of reasoning according to description of personality • A reasoning algorithm for the computation of preference ordering based on the personality of the agent • Reasoning system of the agent allowing for introduction of personality in an argumentative agent 45
  77. 77. Software Implementation
  78. 78. BAIPD • BAIPD, a modified version of Kok’s testbed BAIDD [Kok13] for experimentation with software agents in deliberation dialogues • BAIPD handles the persuasion dialogue process including protocol rules, turn taking and requesting the agents to make moves • The platform contains an implementation of the argumentative agent with personality making use of the Fuzzylite library [Rad14] for an implementation of a FIS 46
  79. 79. Overview 47
  80. 80. Personality Vector 48
  81. 81. Action Selection 49
  82. 82. Action Revision 50
  83. 83. Dialog 51
  84. 84. Summary • BAIPD, a modified version of the BAIDD testbed by Erik Kok, adapted for persuasion (liberal dialogues) • Implementation of the personality model • Including reasoning rules and the reasoning algorithm 52
  85. 85. Opponent Modelling
  86. 86. Opponent Modelling Need for knowledge of the opponent Suppose an agent prefers a faithful attitude and faces a spurious or deceptive opponent. Even though the agent would typically concede easily, it can incorporate knowledge of its opponent: if the agent had a method of modelling the opponent, it would prefer to concede less in this context. 53
  87. 87. Reasoning Scheme • A reasoning scheme that eliminates possible attitudes of the opponent, based on modus tollens • Abduction over possible attitudes prunes the set of possible attitudes An agent with a confident attitude can assert any proposition for which it can construct an argument. The agent has asserted a proposition (claim), but cannot construct an argument for the claim. Therefore the confident attitude is not a possible attitude for the claim move. 54
  88. 88. Pn: claim endangersHumanity On+1: why endangersHumanity Pn+2: endangersHumanity since harmHumans • A : M≤∞ × {P, O} −→ ℘(AT ), with AT as the set of attitudes • A(d, Pn) = {confident, careful, thoughtful} 55
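The elimination step for A(d, Pn) can be sketched as follows. The `requires_argument` set is a hypothetical encoding of the condition on the previous slide (a confident agent asserts only propositions it can argue for); it is not the thesis's full condition table.

```python
def prune_attitudes(possible, can_construct_argument):
    """Modus tollens over attitude conditions: if an attitude requires the
    ability to construct an argument for the asserted claim, and the agent
    evidently lacks that ability, the attitude is eliminated."""
    requires_argument = {"confident"}   # hypothetical: confident => can argue
    if not can_construct_argument:
        possible = possible - requires_argument
    return possible

# A(d, Pn) before pruning, as on the slide:
possible = {"confident", "careful", "thoughtful"}
# The claim was challenged and no argument followed:
print(sorted(prune_attitudes(possible, can_construct_argument=False)))
# ['careful', 'thoughtful']
```

Repeating this per move yields the shrinking attitude sets that the next slide aggregates into a histogram-like opponent model.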
  89. 89. Summary • Continuously pruning the attitude status based on new moves added to the dialogue • Model the attitude statuses of moves in the dialogue using something like a histogram • Providing the histogram as input to a learning algorithm for optimization of the agent’s strategy according to its personality and its opponent’s personality model 56
  90. 90. Conclusion
  91. 91. Conclusion How a personality can be introduced to argumentative agents for persuasion dialogues (research question 1) and how a model for personality in argumentative agents can be devised that allows argumentative agents to reason according to a personality configuration (research question 2): 57
  92. 92. Conclusion • Using the FFM as a basis for a description of personality • Defining a personality model as (i) a description of the personality of an argumentative agent in 15 personality facets (ii) a personality vector describing the agent’s personality configuration 57
  93. 93. Conclusion How an argumentative agent featuring personality can be implemented (research question 3): 57
  94. 94. Conclusion • Adjustment of Erik Kok’s BAIDD testbed, introducing BAIPD • Introducing the reasoning system (i) to describe reasoning rules that determine the effects on preferences based on the personality configuration (ii) describing a reasoning algorithm for reasoning based on the agent’s personality 57
  95. 95. Conclusion On how an argumentative agent featuring personality can model the personality of its opponent (research question 4): 57
  96. 96. Conclusion • Maintaining an opponent modelling based on new moves and corresponding commitments • Determination of possible attitudes for moves in the dialogue • Abduction to eliminate possible attitudes 57
  97. 97. Future Research • Optimizing strategies of agents • Extending this research outside the field of argumentation theory • Investigating hypotheses such as ”a dialogue between two achievement-striving agents increases in length” • Extending or adjusting the personality descriptions used in this research 58
  98. 98. Thank you. 59
