Argumentation in Artificial Intelligence: From Theory to Practice


Argumentation technology is a rich interdisciplinary area of research that, in the last two decades, has emerged as one of the most promising paradigms for commonsense reasoning and conflict resolution in a great variety of domains.
In this tutorial we aim to provide PhD students, early-stage researchers, and experts from different fields of AI with a clear understanding of argumentation in AI and a set of tools they can start using to advance the field.
Part 1 of 2

Published in: Science

  1. argumentation in artificial intelligence. From Theory to Practice. Federico Cerutti† and Mauro Vallati‡. xxi • viii • mmxvii. † Cardiff University • ‡ University of Huddersfield
  2. 2
  3. P. Baroni T. Bench-Capon P. Dunne M. Giacomin A. Hunter T. Norman C. Reed A. Toniolo S. Woltran 3
  4. why bother?
  5. Does MMR vaccination cause autism? 5
  6. Early report: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. A J Wakefield, S H Murch, A Anthony, J Linnell, D M Casson, M Malik, M Berelowitz, A P Dhillon, M A Thomson, P Harvey, A Valentine, S E Davies, J A Walker-Smith. (Scan of the paper's first page; summary text unreadable.) 6
  7. Support. What else should be true if the causal link is true? Alternative explanation. 7 From Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children by Wakefield et al, The Lancet, 1998
  8. Douglas Walton, Chris Reed, Fabrizio Macagno: ARGUMENTATION SCHEMES [WRM08] 8
  9. Argument from Verification. Major Premise: If A (a hypothesis) is true, then B (a proposition reporting an event) will be observed to be true. Minor Premise: B has been observed to be true, in a given instance. Conclusion: Therefore, A is true. Critical Questions. CQ1: Is it the case that if A is true, then B is true? CQ2: Has B been observed to be true (false)? CQ3: Could there be some reason why B is true, other than its being because of A being true? Connection between critical questions, objectivity, and burden of proof. Unclear connection on uncertainty assessment. 9
  10. The New England Journal of Medicine. Copyright © 2002 by the Massachusetts Medical Society. Volume 347, November 7, 2002, Number 19. A POPULATION-BASED STUDY OF MEASLES, MUMPS, AND RUBELLA VACCINATION AND AUTISM. Kreesten Meldgaard Madsen, M.D., Anders Hviid, M.Sc., Mogens Vestergaard, M.D., Diana Schendel, Ph.D., Jan Wohlfahrt, M.Sc., Poul Thorsen, M.D., Jørn Olsen, M.D., and Mads Melbye, M.D. (Scan of the paper's first page; abstract text unreadable.) 10
  11. Support 11 From A Population-based Study of Measles, Mumps, and Rubella Vaccination and Autism by Madsen et al, The New England Journal of Medicine, 2002
  12. Support. What else should be true if the causal link is true? Alternative explanation. Support. Support. 12
  13. structured argumentation
  14. Argument and Computation, 2014, Taylor & Francis, Vol. 5, No. 1, 1-4, http://dx.doi.org/10.1080/19462166.2013.869764. Introduction to structured argumentation. Philippe Besnard, Alejandro Garcia, Anthony Hunter, Sanjay Modgil, Henry Prakken, Guillermo Simari and Francesca Toni [Bes+14] 14
  15. Argument and Computation, 2014, Taylor & Francis, Vol. 5, No. 1, 5-30, http://dx.doi.org/10.1080/19462166.2013.869765. Constructing argument graphs with deductive arguments: a tutorial. Philippe Besnard and Anthony Hunter [BH14] 15
  16. base logic. Let L be a language for a logic, and let ⊢i be the consequence relation for that logic. If α is an atom in L, then α is a positive literal in L and ¬α is a negative literal in L. For a literal β, the complement of β is defined as follows: ∙ If β is a positive literal, i.e. it is of the form α, then the complement of β is the negative literal ¬α; ∙ if β is a negative literal, i.e. it is of the form ¬α, then the complement of β is the positive literal α. 16
  17. deductive argument. A deductive argument is an ordered pair ⟨Φ, α⟩ where Φ ⊢i α. Φ is the support, or premises, or assumptions of the argument, and α is the claim, or conclusion, of the argument. For an argument A = ⟨Φ, α⟩, the function Support(A) returns Φ and the function Claim(A) returns α. ⟨{report(rain), report(rain) → carry(umbrella)}, carry(umbrella)⟩ 17
  18. Here we focus on simple logic, but other options include non-monotonic logics, conditional logics, temporal logics, description logics, and paraconsistent logics. 18
  19. simple logic. Simple logic is based on a language of literals and simple rules, where each simple rule is of the form α1 ∧ . . . ∧ αk → β, where α1 to αk and β are literals. The consequence relation is modus ponens (i.e. implication elimination): ∆ ⊢s β iff there is an α1 ∧ · · · ∧ αn → β ∈ ∆ and for each αi ∈ {α1, . . . , αn} either αi ∈ ∆ or ∆ ⊢s αi. Let ∆ = {a, b, a ∧ b → c, c → d}. Hence, ∆ ⊢s c and ∆ ⊢s d. However, ∆ ̸⊢s a and ∆ ̸⊢s b. 19
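The ⊢s relation on slide 19 can be prototyped by forward-chaining modus ponens. A minimal sketch under my own encoding (literals as strings, a rule as a (premises, conclusion) tuple); the function names are illustrative, not from the tutorial. Note that, per the definition, β is only derivable as the conclusion of some rule, so ∆ ̸⊢s a even though a ∈ ∆:

```python
def derives(delta_literals, delta_rules, beta):
    """Delta |-s beta: forward-chain the simple rules until a fixpoint;
    beta counts as derived only if it is the conclusion of a fired rule."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in delta_rules:
            if conclusion not in derived and all(
                p in delta_literals or p in derived for p in premises
            ):
                derived.add(conclusion)
                changed = True
    return beta in derived

# The slides' example: Delta = {a, b, a ∧ b → c, c → d}
literals = {"a", "b"}
rules = [(("a", "b"), "c"), (("c",), "d")]
assert derives(literals, rules, "c")
assert derives(literals, rules, "d")
assert not derives(literals, rules, "a")  # a is in Delta but not a rule conclusion
```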
  20. arguments based on simple logic. Let ∆ be a simple logic knowledgebase. For Φ ⊆ ∆, and a literal α, ⟨Φ, α⟩ is a simple argument iff Φ ⊢s α and there is no proper subset Φ′ of Φ such that Φ′ ⊢s α. Let p1, p2, and p3 be the following formulae. p1 = oilCompany(BP) p2 = goodPerformer(BP) p3 = oilCompany(BP) ∧ goodPerformer(BP) → goodInvestment(BP) Then ⟨{p1, p2, p3}, goodInvestment(BP)⟩ is a simple argument. 20
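The subset-minimality condition on supports can be checked naively by testing every proper subset of Φ. A sketch under my own encoding, in which a knowledgebase mixes literal strings with rule tuples; all names here are mine:

```python
from itertools import combinations

def derives(kb, beta):
    """Delta |-s beta over a knowledgebase mixing literals and rules."""
    rules = [f for f in kb if isinstance(f, tuple)]
    facts = {f for f in kb if isinstance(f, str)}
    derived = set()
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if concl not in derived and all(p in facts or p in derived for p in prem):
                derived.add(concl)
                changed = True
    return beta in derived

def is_simple_argument(phi, alpha):
    """<Phi, alpha> is a simple argument iff Phi |-s alpha and no proper
    subset of Phi also derives alpha (minimality)."""
    if not derives(phi, alpha):
        return False
    return not any(
        derives(set(sub), alpha)
        for k in range(len(phi))
        for sub in combinations(phi, k)
    )

# The slides' BP example: p1, p2 and the rule p3
phi = {"oilCompany(BP)", "goodPerformer(BP)",
       (("oilCompany(BP)", "goodPerformer(BP)"), "goodInvestment(BP)")}
assert is_simple_argument(phi, "goodInvestment(BP)")
# A padded support still derives the claim but is not minimal
assert not is_simple_argument(phi | {"irrelevant"}, "goodInvestment(BP)")
```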
  21. rebut and undercut for simple logic. For simple arguments A and B, we consider the following types of simple attack: ∙ A is a simple undercut of B if there is a simple rule α1 ∧ · · · ∧ αn → β in Support(B) and there is an αi ∈ {α1, . . . , αn} such that Claim(A) is the complement of αi; ∙ A is a simple rebut of B if Claim(A) is the complement of Claim(B). A1 = ⟨{efficientMetro, efficientMetro → useMetro}, useMetro⟩ A2 = ⟨{strikeMetro, strikeMetro → ¬efficientMetro}, ¬efficientMetro⟩ A3 = ⟨{govDeficit, govDeficit → cutGovSpending}, cutGovSpending⟩ A4 = ⟨{weakEconomy, weakEconomy → ¬cutGovSpending}, ¬cutGovSpending⟩ 21
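The two attack types above are direct to encode. An illustrative sketch (my own representation, not the tutorial's): an argument is a (support, claim) pair, rules inside the support are (premises, conclusion) tuples, and "¬" marks negation:

```python
def comp(lit):
    """Complement of a literal, as defined for the base logic."""
    return lit[1:] if lit.startswith("¬") else "¬" + lit

def rebuts(a, b):
    """Claim(A) is the complement of Claim(B)."""
    return a[1] == comp(b[1])

def undercuts(a, b):
    """Claim(A) is the complement of a premise of some rule in Support(B)."""
    return any(isinstance(f, tuple) and a[1] in [comp(p) for p in f[0]]
               for f in b[0])

# The slides' metro/economy arguments A1..A4
A1 = ({"efficientMetro", (("efficientMetro",), "useMetro")}, "useMetro")
A2 = ({"strikeMetro", (("strikeMetro",), "¬efficientMetro")}, "¬efficientMetro")
A3 = ({"govDeficit", (("govDeficit",), "cutGovSpending")}, "cutGovSpending")
A4 = ({"weakEconomy", (("weakEconomy",), "¬cutGovSpending")}, "¬cutGovSpending")

assert undercuts(A2, A1)                  # A2 attacks A1's rule premise
assert not rebuts(A2, A1)                 # claims are not complementary
assert rebuts(A3, A4) and rebuts(A4, A3)  # symmetric rebuttal
```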
  22. Support. What else should be true if the causal link is true? Alternative explanation. Support. Support. 22
  23. [Ver03] 23
  24. [BL08] [PS13] 24
  25. tableWakefield ∧ ¬alternativeExplanation ∧ causalLink → vaccinationCause; tableWakefield; causalLink; ¬alternativeExplanation; ¬associations → ¬causalLink; largeStudy → ¬associations; largeStudy. A1 = ⟨{largeStudy, largeStudy → ¬associations, ¬associations → ¬causalLink}, ¬causalLink⟩ A2 = ⟨{tableWakefield, ¬alternativeExplanation, causalLink, tableWakefield ∧ ¬alternativeExplanation ∧ causalLink → vaccinationCause}, vaccinationCause⟩ 25
  26. argument graphs. The flight is low cost and luxury, therefore it is a good flight. A flight cannot be both low cost and luxury. A1 = ⟨{lowCostFly, luxuryFly, lowCostFly ∧ luxuryFly → goodFly}, goodFly⟩ A2 = ⟨{¬(lowCostFly ∧ luxuryFly)}, ¬lowCostFly ∨ ¬luxuryFly⟩ 26
  27. dung's argumentation framework
  28. Artificial Intelligence 77 (1995) 321-357. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Phan Minh Dung [Dun95] 28
  29. Definition 1. A Dung argumentation framework AF is a pair ⟨A, →⟩ where A is a set of arguments, and → is a binary relation on A, i.e. → ⊆ A × A. 29
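Definition 1 and the basic acceptability notions that follow (conflict-freeness and admissibility, cf. Defs. 2 and 5 later in the deck) fit in a few lines. A sketch under my own encoding of an AF as a set of argument names plus a set of attack pairs:

```python
def conflict_free(S, attacks):
    """No member of S attacks another member of S (Def. 2)."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, attacks, a):
    """Every attacker of a is counter-attacked by some member of S."""
    return all(any((d, b) in attacks for d in S)
               for (b, x) in attacks if x == a)

def admissible(S, attacks):
    """Conflict-free and able to defend each of its members (Def. 5)."""
    return conflict_free(S, attacks) and all(defends(S, attacks, a) for a in S)

# An example AF of my own: mutual attack a <-> b, plus b -> c
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("b", "c")}
assert admissible({"a"}, R)       # a answers b's attack itself
assert admissible({"a", "c"}, R)  # a also defends c against b
assert not admissible({"c"}, R)   # c cannot answer b's attack
```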
  30. A semantics is a way to identify sets of arguments (i.e. extensions) "surviving the conflict together". 30
  31. (some) semantics properties. On principle-based evaluation of extension-based argumentation semantics. Pietro Baroni, Massimiliano Giacomin [BG07]. The Knowledge Engineering Review, Vol. 26:4, 365-410. © Cambridge University Press, 2011. doi:10.1017/S0269888911000166. An introduction to argumentation semantics. Pietro Baroni, Martin Caminada and Massimiliano Giacomin [BCG11] 31
  32. (some) semantics properties. ∙ Conflict-freeness (Def. 2): an attacking and an attacked argument cannot stay together (∅ is c.f. by def.). ∙ Admissibility (Def. 5). ∙ Strong-Admissibility (Def. 7). ∙ Reinstatement (Def. 8). ∙ I-Maximality (Def. 9). ∙ Directionality (Def. 12). 32
  33. (some) semantics properties. ∙ Admissibility (Def. 5): the extension should be able to defend itself, "fight fire with fire" (∅ is adm. by def.). 33
  34. (some) semantics properties. ∙ Strong-Admissibility (Def. 7): no self-defending arguments are needed (∅ is strongly adm. by def.). 34
  35. (some) semantics properties. ∙ Reinstatement (Def. 8): if you defend some argument you should take it on board (∅ satisfies the principle only if there are no unattacked arguments). 35
  36. (some) semantics properties. ∙ I-Maximality (Def. 9): no extension is a proper subset of another one. 36
  37. (some) semantics properties. ∙ Directionality (Def. 12): a (set of) argument(s) is affected only by its ancestors in the attack relation. 37
  38. complete extension (def. 15). Admissibility and reinstatement. Set of conflict-free arguments s.t. each defended argument is included. (Example graph over arguments a-h.) Extensions: { {a, c, d, e, g}, {a, b, c, e, g}, {a, c, e, g} } 38
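Since the example graph on this slide is only given pictorially, the following brute-force enumeration of complete extensions uses a small AF of my own; it checks exactly the two conditions just stated, admissibility plus inclusion of every defended argument:

```python
from itertools import combinations

def complete_extensions(A, R):
    """Enumerate all complete extensions of <A, R> by brute force."""
    def cf(S):
        return not any((a, b) in R for a in S for b in S)
    def defends(S, a):
        return all(any((d, b) in R for d in S) for (b, x) in R if x == a)
    exts = []
    for k in range(len(A) + 1):
        for combo in combinations(sorted(A), k):
            S = set(combo)
            if cf(S) and all(defends(S, a) for a in S) \
                     and all(a in S for a in A if defends(S, a)):
                exts.append(S)
    return exts

# Example AF of my own: a <-> b, b -> c
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("b", "c")}
exts = complete_extensions(A, R)
# Three complete extensions: {}, {b}, and {a, c}
assert set() in exts and {"b"} in exts and {"a", "c"} in exts
assert len(exts) == 3
```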
  39. grounded extension (def. 16). Strong Admissibility. Minimum complete extension. (Same example graph.) { {a, c, e, g} } 39
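The grounded extension can be computed as the least fixpoint of the characteristic function F(S) = {a : S defends a}, iterated from the empty set. A sketch on an attack chain of my own (the encoding and names are mine):

```python
def grounded(A, R):
    """Least fixpoint of the characteristic function, starting from {}."""
    S = set()
    while True:
        F = {a for a in A
             if all(any((d, b) in R for d in S) for (b, x) in R if x == a)}
        if F == S:
            return S
        S = F

# Example of my own: the chain a -> b -> c -> d
A = {"a", "b", "c", "d"}
R = {("a", "b"), ("b", "c"), ("c", "d")}
# a is unattacked, so a is in; a defends c against b; d's only attacker
# c is accepted, so d stays out.
assert grounded(A, R) == {"a", "c"}
```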
  40. preferred extension (def. 17). Admissibility and maximality. Maximal complete extensions. (Same example graph.) { {a, c, d, e, g}, {a, b, c, e, g} } 40
  41. stable extension (def. 17). "Horror vacui": the absence of odd-length cycles is a sufficient condition for the existence of stable extensions. Complete extensions attacking all the arguments outside. { {a, c, d, e, g}, {a, b, c, e, g} } 41
  42. complete labellings (def. 20). An argument is IN if all its attackers are OUT. An argument is OUT if at least one of its attackers is IN. Otherwise it is UNDEC. 42
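The three labelling conditions can be checked mechanically. A sketch (my own encoding and example AF) that tests whether a candidate labelling satisfies the conditions of Def. 20:

```python
def is_complete_labelling(A, R, lab):
    """Check the IN/OUT/UNDEC conditions of Def. 20 for labelling lab."""
    for a in A:
        attackers = [b for (b, x) in R if x == a]
        if lab[a] == "IN" and not all(lab[b] == "OUT" for b in attackers):
            return False
        if lab[a] == "OUT" and not any(lab[b] == "IN" for b in attackers):
            return False
        if lab[a] == "UNDEC" and (
            all(lab[b] == "OUT" for b in attackers)      # would have to be IN
            or any(lab[b] == "IN" for b in attackers)    # would have to be OUT
        ):
            return False
    return True

# Example of my own: a <-> b, b -> c
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("b", "c")}
assert is_complete_labelling(A, R, {"a": "IN", "b": "OUT", "c": "IN"})
assert is_complete_labelling(A, R, {"a": "UNDEC", "b": "UNDEC", "c": "UNDEC"})
assert not is_complete_labelling(A, R, {"a": "IN", "b": "IN", "c": "OUT"})
```

The all-UNDEC labelling is legal here because of the mutual attack; it corresponds to the grounded extension of this AF being empty.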
  43. complete labellings (def. 20). Max. UNDEC ≡ Grounded. (Same example graph.) { {a, c, e, g} } 43
  44. complete labellings (def. 20). Max. IN ≡ Preferred. { {a, c, d, e, g} } 44
  45. complete labellings (def. 20). Max. IN ≡ Preferred. { {a, b, c, e, g} } 45
  46. complete labellings (def. 20). No UNDEC ≡ Stable. { {a, c, d, e, g} } 46
  47. complete labellings (def. 20). No UNDEC ≡ Stable. { {a, b, c, e, g} } 47
  48. properties of semantics.
      CO GR PR ST
      D-conflict-freeness: Yes Yes Yes Yes
      D-admissibility: Yes Yes Yes Yes
      D-strong admissibility: No Yes No No
      D-reinstatement: Yes Yes Yes Yes
      D-I-maximality: No Yes Yes Yes
      D-directionality: Yes Yes Yes No
      48
  49. Chapter 5: Complexity of Abstract Argumentation. Paul E. Dunne and Michael Wooldridge. In I. Rahwan, G. R. Simari (eds.), Argumentation in Artificial Intelligence, DOI 10.1007/978-0-387-98197-0_5, © Springer Science+Business Media, LLC 2009 [DW09] 49
  50. σ = CO | σ = GR | σ = PR | σ = ST
      exists_σ: trivial | trivial | trivial | NP-c
      ca_σ: NP-c | polynomial | NP-c | NP-c
      sa_σ: polynomial | polynomial | Π^p_2-c | coNP-c
      ver_σ: polynomial | polynomial | coNP-c | polynomial
      ne_σ: NP-c | polynomial | NP-c | NP-c
      50
  51. an exercise. (Graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p.) 51
  52. an exercise. E_CO(∆) = { {a, c}, {a, c, f}, {a, c, m}, {a, c, f, m}, {a, c, f, l}, {a, c, g, m} } 52
  53. an exercise. E_GR(∆) = { {a, c} } 53
  54. an exercise. E_PR(∆) = { {a, c, f, m}, {a, c, f, l}, {a, c, g, m} } 54
  55. an exercise. E_ST(∆) = { } (no stable extension exists) 55
  56. decomposability. Artificial Intelligence 217 (2014) 144-197. On the Input/Output behavior of argumentation frameworks. Pietro Baroni, Guido Boella, Federico Cerutti, Massimiliano Giacomin, Leendert van der Torre, Serena Villata [Bar+14] 56
  57. decomposability. AF1, AF2, AF3. Is it possible to consider a (partial) argumentation framework as a black-box and focus only on the input/output interface? 57
  58. decomposability. A semantics is: ∙ Fully decomposable (Def. 29): any combination of "local" labellings gives rise to a global labelling, and any global labelling arises from a set of "local" labellings. ∙ Top-Down decomposable (Def. 28): combining "local" labellings you get all global labellings, possibly more. ∙ Bottom-Up decomposable (Def. 27): combining "local" labellings you get only global labellings, possibly fewer. 58
  61. decomposability.
      CO ST GR PR
      Full decomposability: Yes Yes No No
      Top-down decomposability: Yes Yes Yes Yes
      Bottom-up decomposability: Yes Yes No No
      61
  62. a semantic-web view of argumentation
  63. The Knowledge Engineering Review, Vol. 26:4, 487-511. © Cambridge University Press, 2011. doi:10.1017/S0269888911000191. Representing and classifying arguments on the Semantic Web. Iyad Rahwan, Bita Banihashemi, Chris Reed, Douglas Walton and Sherief Abdallah [Rah+11] 63
  64. (AIF ontology diagram.) A Node Graph (argument network) has Nodes and Edges. A Node is either an Information Node (I-Node) or a Scheme Node (S-Node); an S-Node is a Rule of inference application node (RA-Node), a Conflict application node (CA-Node), a Preference application node (PA-Node), or a Derived concept application node (e.g. defeat). S-Nodes use Schemes: Rule of inference schemes (logical or presumptive inference), Conflict schemes (e.g. logical conflict), and Preference schemes (logical or presumptive preference). 64
  65. Implementing the Argument Web. By Floris Bex, John Lawrence, Mark Snaith, and Chris Reed. Communications of the ACM, October 2013, Vol. 56, No. 10 [Bex+13] 65
  66. https://www.youtube.com/watch?v=KVDgH-g8_gU 66
  67. http://www.arg-tech.org/AIFdb/argview/4879 http://toast.arg-tech.org/ 67
  68. cispaces
  69. Supporting Reasoning with Different Types of Evidence in Intelligence Analysis. Alice Toniolo, Anthony Etuk, Robin Wentao Ouyang, Timothy J. Norman, Federico Cerutti, Mani Srivastava, Nir Oren, Timothy Dropps, John A. Allen, Paul Sullivan. Appears in: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), Bordini, Elkind, Weiss, Yolum (eds.), May 4-8, 2015, Istanbul, Turkey. [Ton+15] 69
  70. Research question: Evaluate the Jupiter intervention on a conflict ongoing on Mars. Research hypothesis: Is the Jupiter intervention on Mars humanitarian or strategic? Data gathering: beyond the scope of this work. Justification of possible hypotheses based on data and logic. The example considered in this version of the slides clarifies some misunderstandings raised during the presentation and hopefully reduces the elements of controversy. 70
  71. Sensemaking Agent. Data Request/Crowdsourcing Agent. Provenance Agent. GUI Interface: ToolBox, WorkBox, InfoBox, ReqBox, ChatBox. 71
  72. sensemaking agent and walton's argumentation schemes. Argument from Cause to Effect. Major Premise: Generally, if A occurs, then B will (might) occur. Minor Premise: In this case, A occurs (might occur). Conclusion: Therefore, in this case, B will (might) occur. Critical questions. CQ1: How strong is the causal generalisation? CQ2: Is the evidence cited (if there is any) strong enough to warrant the causal generalisation? CQ3: Are there other causal factors that could interfere with the production of the effect in the given case? 72
  73. Jupiter troops deliver aid to Martians (PRO: Jupiter intervention on Mars is humanitarian). Agreement to exchange crude oil for refined petroleum (PRO: Jupiter intervention on Mars aims at protecting strategic assets). The two hypotheses attack each other (CON, CON). 73
  74. As slide 73, plus: Civilian casualties caused by Jupiter forces (CON the humanitarian hypothesis), supported via LCE by: Use of old Jupiter military doctrine causes civilian casualties; Large use of old Jupiter military techniques on Mars. 74
  75. As slide 74, plus CQ2: There is no evidence to show that the cause occurred. 75
  76. As slide 75, plus CON: Use of massive aerial and artillery strikes. 76
  77. knowledge base. Kp = { aid; oil; doctrine; technique; noevidence; artillery; } Rd = { aid ⇒ humanitarian; oil ⇒ strategic; doctrine ∧ technique ⇒ casualties; } humanitarian = −strategic; casualties = ^humanitarian; noevidence = ^technique; artillery = ^noevidence 77
  78. from knowledge base to argument graph. Kp and Rd as on slide 77. Arguments: a1: aid; a2: a1 ⇒ humanitarian; a3: oil; a4: a3 ⇒ strategic; a5: doctrine; a6: technique; a7: a5 ∧ a6 ⇒ casualties; a8: noevidence; a9: artillery. *Prakken, H. (2010). An abstract framework for argumentation with structured arguments. Argument & Computation, 1(2):93-124. 78
  79. a1: aid; a2: a1 ⇒ humanitarian; a3: oil; a4: a3 ⇒ strategic; a5: doctrine; a6: technique; a7: a5 ∧ a6 ⇒ casualties; a8: noevidence; a9: artillery 79
  80. As slide 76 (full argument graph). 80
  81. Sensemaking Agent. Data Request/Crowdsourcing Agent. Provenance Agent. GUI Interface: ToolBox, WorkBox, InfoBox, ReqBox, ChatBox. 81
  82. 82
  83. the frontier
  84. belief revision and argumentation. Chapter 17: Belief Revision and Argumentation Theory. Marcelo A. Falappa, Gabriele Kern-Isberner and Guillermo R. Simari. In I. Rahwan, G. R. Simari (eds.), Argumentation in Artificial Intelligence, DOI 10.1007/978-0-387-98197-0_17, © Springer Science+Business Media, LLC 2009 [FKS09]. Logic and Cognitive Systems, Series Editors: D. Gabbay, J. Siekmann, J. van Benthem and J. Woods. Trends in Belief Revision and Argumentation Dynamics [FGS13] 84
  85. belief revision and argumentation. Potential cross-fertilisation. Argumentation in Belief Revision: ∙ Justification-based truth maintenance systems; ∙ Assumption-based truth maintenance systems. Some conceptual differences: in revision, external beliefs are compared with internal beliefs and, after a selection process, some sentences are discarded and others are accepted. [FKS09] Belief Revision in Argumentation: ∙ Changing by adding or deleting an argument. ∙ Changing by adding or deleting a set of arguments. ∙ Changing the attack (and/or defeat) relation among arguments. ∙ Changing the status of beliefs (as conclusions of arguments). ∙ Changing the type of an argument (from strict to defeasible, or vice versa). 85
  86. argument mining. Computational Models of Argument, B. Verheij et al. (Eds.), IOS Press, 2012. Generating Abstract Arguments: a Natural Language Approach. Elena Cabrio and Serena Villata [CV12]. Computational Models of Argument, S. Parsons et al. (Eds.), IOS Press, 2014. Towards Argument Mining from Dialogue. Katarzyna Budzynska, Mathilde Janier, Juyeon Kang, Chris Reed, Patrick Saint-Dizier, Manfred Stede, and Olena Yaskorska [Bud+14] 86
  87. argument mining. http://www-sop.inria.fr/NoDE/ http://corpora.aifdb.org/ 87
  88. argumentation and humans. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Providing Arguments in Discussions Based on the Prediction of Human Argumentative Behavior. Ariel Rosenfeld and Sarit Kraus, Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel. rosenfa5@cs.biu.ac.il, sarit@cs.biu.ac.il [RK15]. Providing Arguments in Discussions on the Basis of the Prediction of Human Argumentative Behavior. Ariel Rosenfeld and Sarit Kraus, Bar-Ilan University. ACM Transactions on Interactive Intelligent Systems, Vol. 6, No. 4, Article 30, December 2016 [RK16] 88
  89. argumentation and humans [Hun16] 89
  90. natural language interfaces. ECAI 2014, T. Schaub et al. (Eds.), © 2014 The Authors and IOS Press. Published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-419-0-207. Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation. Federico Cerutti, Nava Tintarev and Nir Oren [CTO14] 90
  91. How can we create a human-understandable interface to defeasible reasoning in order to guarantee that human users will agree with the result of the automated reasoning procedures? 91
  92. a1: σA ⇒ γ; a2: σB ⇒ ¬γ; a3: ⇒ a1 ≺ a2.
      First Scenario. a1: Alice suggests moving in together with Jane. a2: Stacy suggests otherwise because Jane might have a hidden agenda. a3: Stacy is your best friend.
      Agreement: a1 12.5, a2 68.8, don't know 18.8.
      Second Scenario. a1: TV1 suggests that it will rain tomorrow. a2: TV2 suggests that tomorrow will be cloudy but it will not rain. a3: TV2 is generally more accurate than TV1.
      Agreement: a1 5.0, a2 50.0, don't know 45.0.
      92
  93. https://www.darpa.mil/program/explainable-artificial-intelligence 93
  94. conclusion
  95. https://www.youtube.com/watch?v=HwBmPiOmEGQ 95
  96. 96
