argumentation in artificial intelligence
20 Years After Dung’s Work
Federico Cerutti†
xxvi • vii • mmxv
† University of Aberdeen
P. Baroni
U. Brescia
T. J. M. Bench-Capon
U. Liverpool
C. Cayrol
IRIT
P. E. Dunne
U. Liverpool
M. Giacomin
U. Brescia
A. Hunter
UCL
H. Li
U. Aberdeen
S. Modgil
KCL
T. J. Norman
U. Aberdeen
N. Oren
U. Aberdeen
C. Reed
U. Dundee
G. R. Simari
U. Nacional del Sur
A. Toniolo
U. Aberdeen
M. Vallati
U. Huddersfield
S. Woltran
TU Wien
J. Leite
New U. Lisbon
S. Parsons
KCL
M. Thimm
U. Koblenz
This tutorial was sponsored by the U.S. Army Research Laboratory and the U.K.
Ministry of Defence, under Agreement Number W911NF-06-3-0001. The views
and conclusions contained in this document are those of the author(s) and
should not be interpreted as representing the official policies, either expressed
or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K.
Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are
authorized to reproduce and distribute reprints for Government purposes
notwithstanding any copyright notation hereon.
The tutor acknowledges the contribution of the Santander Universities Network in supporting his travel.
outline
∙ Introduction: Why bother?
∙ Dung’s AF: Syntax, semantics, current state of research
∙ Argumentation Schemes: Arguments in human experience
∙ A Semantic-Web view of Argumentation: AIF, OVA+, and other tools
∙ Frameworks: Abstract, instantiated, and probabilistic frameworks (a kite-level view)
∙ CISpaces: “One Ring to bring them all and in the darkness bind them”
∙ Algorithms and Implementations: …and how to choose among them
∙ The frontier
what is missing
A lot
Dialogues
Argumentation and trust
Argumentation in multi-agent systems
Several approaches to represent arguments
Several extensions to Dung’s framework
Several frontier approaches
…
..why bother?
∙ You should drink beer: there is no milk in the shop and the milk you have is sour.
∙ You should drink coffee: there is a coffee machine and fresh coffee in the cupboard; moreover, beer makes you sick.
∙ You should drink milk: there is fresh milk in your bag because you went to the shop earlier, and the Principal is visiting later today, so you had better not drink alcohol.

[Figure: the conclusion flips as new information arrives: first beer (Beer 1, Milk 0), then coffee (Beer 0, Milk 0, Coffee 1), then milk (Beer 0, Milk 1)]
..dung’s argumentation framework
[Dun95]
Definition 1
A Dung argumentation framework AF is a pair
⟨A, →⟩, where A is a set of arguments and → is a binary relation on A, i.e. → ⊆ A × A.
A semantics is a way to identify sets of arguments (i.e. extensions)
“surviving the conflict together”
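The two building blocks behind every semantics below (conflict-freeness and defence) are easy to state operationally. A minimal sketch, over a hypothetical three-argument AF (not one of the slides’ examples):

```python
# Hypothetical AF: a -> b, b -> c
A = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

def conflict_free(S):
    """No member of S attacks another member of S."""
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, arg):
    """Every attacker of arg is counter-attacked by some member of S."""
    return all(any((s, x) in attacks for s in S)
               for x in A if (x, arg) in attacks)
```

Note that the empty set trivially defends any unattacked argument, matching the slides’ remark that ∅ is conflict-free and admissible by definition.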
(some) semantics properties
[BG07] [BCG11]
∙ Conflict-freeness (Def. 2): an attacking and an attacked argument cannot stay together (∅ is c.f. by def.)
∙ Admissibility (Def. 5): the extension should be able to defend itself, “fight fire with fire” (∅ is adm. by def.)
∙ Strong Admissibility (Def. 7): no self-defeating arguments (∅ is strongly adm. by def.)
∙ Reinstatement (Def. 8): if you defend some argument you should take it on board (∅ satisfies the principle only if there are no unattacked arguments)
∙ I-Maximality (Def. 9): no extension is a proper subset of another one
∙ Directionality (Def. 12): a (set of) argument(s) is affected only by its ancestors in the attack relation
[Figure: the beer/milk/coffee arguments (“You should drink milk”, “You should drink beer”, “You should drink coffee”, and their supporting statements) mapped onto an abstract AF with arguments a–h]
complete extension (def. 15)
Admissibility and reinstatement
Set of conflict-free arguments s.t. each defended argument is included
[AF graph over arguments a–h, as above]
ECO(∆) = { {a, c, d, e, g}, {a, b, c, e, g}, {a, c, e, g} }
grounded extension (def. 16)
Strong Admissibility
Minimum complete extension
[AF graph over arguments a–h, as above]
EGR(∆) = { {a, c, e, g} }
preferred extension (def. 17)
Admissibility and maximality
Maximum complete extensions
[AF graph over arguments a–h, as above]
EPR(∆) = { {a, c, d, e, g}, {a, b, c, e, g} }
stable extension (def. 17)
“Horror vacui”: the absence of odd-length cycles is a sufficient condition for the existence of stable extensions.
Complete extensions attacking all the arguments outside
[AF graph over arguments a–h, as above]
EST(∆) = { {a, c, d, e, g}, {a, b, c, e, g} }
complete labellings (def. 20)
[AF graph over arguments a–h, as above]
∙ Max. UNDEC ≡ Grounded: {a, c, e, g}
∙ Max. IN ≡ Preferred: {a, c, d, e, g} and {a, b, c, e, g}
∙ No UNDEC ≡ Stable: {a, c, d, e, g} and {a, b, c, e, g}
properties of semantics
CO GR PR ST
D-conflict-free Yes Yes Yes Yes
D-admissibility Yes Yes Yes Yes
D-strong admissibility No Yes No No
D-reinstatement Yes Yes Yes Yes
D-I-maximality No Yes Yes Yes
D-directionality Yes Yes Yes No
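For small AFs, the labelling characterizations above can be checked by brute force. A sketch that enumerates all complete labellings (IN iff all attackers OUT; OUT iff some attacker IN; UNDEC otherwise), over hypothetical toy AFs, exponential and for illustration only:

```python
from itertools import product

def complete_labellings(args, att):
    """Enumerate all complete labellings of (args, att) by brute force."""
    args = sorted(args)
    found = []
    for labels in product(("IN", "OUT", "UNDEC"), repeat=len(args)):
        L = dict(zip(args, labels))
        ok = True
        for a in args:
            attackers = [x for x in args if (x, a) in att]
            legal_in = all(L[x] == "OUT" for x in attackers)
            legal_out = any(L[x] == "IN" for x in attackers)
            # a complete labelling labels a IN iff legal_in, OUT iff legal_out
            if L[a] == "IN" and not legal_in:
                ok = False
            elif L[a] == "OUT" and not legal_out:
                ok = False
            elif L[a] == "UNDEC" and (legal_in or legal_out):
                ok = False
        if ok:
            found.append(L)
    return found
```

For a mutual attack between a and b this yields the three expected labellings (a IN/b OUT, a OUT/b IN, both UNDEC).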
complexity
[DW09]
σ = CO σ = GR σ = PR σ = ST
existsσ trivial trivial trivial np-c
caσ np-c polynomial np-c np-c
saσ polynomial polynomial Πp2-c conp-c
verσ polynomial polynomial conp-c polynomial
neσ np-c polynomial np-c np-c
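As the table shows, everything about the grounded semantics is polynomial; it can be computed as the least fixpoint of the characteristic function F(S) = {a | S defends a}, iterated from the empty set. A minimal sketch, with a hypothetical toy AF in the tests rather than the slides’ examples:

```python
def grounded(args, att):
    """Grounded extension: least fixpoint of F(S) = {a | S defends a}."""
    S = set()
    while True:
        # a is defended by S iff every attacker of a is attacked by S
        defended = {a for a in args
                    if all(any((s, x) in att for s in S)
                           for x in args if (x, a) in att)}
        if defended == S:
            return S
        S = defended
```

The first iteration collects the unattacked arguments; each further iteration adds the arguments they defend, which converges on finite AFs.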
an exercise
[Graph of AF ∆ over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
ECO(∆) = { {a, c}, {a, c, f}, {a, c, m}, {a, c, f, m}, {a, c, f, l}, {a, c, g, m} }
EGR(∆) = { {a, c} }
EPR(∆) = { {a, c, f, m}, {a, c, f, l}, {a, c, g, m} }
EST(∆) = ∅
http://rull.dbai.tuwien.ac.at:8080/ASPARTIX/index.faces
skepticisms and comparisons of sets of extensions
[BG09b]
[Hasse diagram: GR, CO, PR, ST ordered by the skepticism relation ⪯S]
Comparing extensions individually:
∙ E1 ⪯E∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1 : E1 ⊆ E2
∙ E1 ⪯E∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2 : E1 ⊆ E2
signatures
[Dun+14]
The signature of a semantics is the collection of all possible sets of extensions an AF can possess under that semantics (Def. 25).
For S ⊆ 2A:
∙ ArgsS = ∪S∈S S;
∙ PairsS = {⟨a, b⟩ | ∃S ∈ S s.t. {a, b} ⊆ S}.
S = { { a, d, e },
{ b, c, e },
{ a, b } }
ArgsS = {a, b, c, d, e} PairsS = {⟨a, b⟩, ⟨a, d⟩, ⟨a, e⟩, ⟨b, c⟩, ⟨b, e⟩, ⟨c, e⟩, ⟨d, e⟩}
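The ArgsS and PairsS of the example can be recomputed mechanically. A sketch assuming extensions are plain Python sets, with pairs kept unordered and distinct as in the listing above:

```python
def args_of(S):
    """Args_S: union of all extensions in S."""
    return set().union(*S) if S else set()

def pairs_of(S):
    """Pairs_S: unordered pairs of distinct arguments co-occurring
    in some extension of S."""
    return {frozenset((a, b)) for E in S for a in E for b in E if a != b}

# The example from the slide
S = [{"a", "d", "e"}, {"b", "c", "e"}, {"a", "b"}]
```

On this S, args_of yields {a, b, c, d, e} and pairs_of yields the seven pairs listed above; a and c never occur together, so ⟨a, c⟩ is absent.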
signatures
∙ Incomparable (Def. 26): A ⊆ B iff A = B (“maximality”)
∙ Tight (Def. 27): if S ∪ {a} ∉ S then ∃b ∈ S s.t. ⟨a, b⟩ ∉ PairsS
(if an argument does not occur in some extension, there must be a reason for that, typically a conflict)
∙ Adm-Closed (Def. 28): if ⟨a, b⟩ ∈ PairsS for all a, b ∈ A ∪ B, then A ∪ B ∈ S (“admissibility”)
Stable iff incomparable and tight
Preferred iff non-empty, incomparable and adm-closed
signatures
S = { { a, d, e },
{ b, c, e },
{ a, b } }
incomparable and adm-closed (⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S)
[Graph of an AF over arguments a–f realizing S under the preferred semantics]
exercise
S = { { a, d, e },
{ b, c, e },
{ a, b, d } }
Does an AF ∆ having EPR(∆) = S exist?
exercise
S = { { a, d, e },
{ b, c, e },
{ a, b, d } }
Does an AF ∆ having EPR(∆) = S exist?
No
PairsS = {⟨a, b⟩, ⟨a, d⟩, ⟨a, e⟩, ⟨b, c⟩, ⟨b, e⟩, ⟨c, e⟩, ⟨d, e⟩, ⟨b, d⟩}
b, d ∈ {a, d, e} ∪ {a, b, d}, but {a, d, e} ∪ {a, b, d} = {a, b, d, e} ∉ S
decomposability
[Bar+14]
[Figure: an AF partitioned into sub-frameworks AF1, AF2, AF3]
Is it possible to consider a (partial) argumentation framework as a black-box and focus only on the input/output interface?
decomposability
A semantics is:
∙ Fully decomposable (Def. 35):
∙ any combination of “local” labellings gives rise to a global labelling;
∙ any global labelling arises from a set of “local” labellings
∙ Top-Down decomposable (Def. 36):
combining “local” labellings you get all global labellings, possibly more
∙ Bottom-Up decomposable (Def. 37):
combining “local” labellings you get only global labellings, possibly less
CO ST GR PR
Full decomposability Yes Yes No No
Top-down decomposability Yes Yes Yes Yes
Bottom-up decomposability Yes Yes No No
..argumentation schemes
what is an argument?
The Argument Clinic
what is an argument?
Argumentation is a verbal,
social, and rational activity aimed
at convincing a reasonable critic of
the acceptability of a standpoint by
putting forward a constellation of
propositions justifying or refuting
the proposition expressed in the
standpoint.
Some elements of dialogue in the handout, but they
will not be considered here.
[WRM08]
practical inference: an example of argumentation scheme
Premises:
Goal Premise Bringing about Sn is my goal
Means Premise In order to bring about Sn, I need to bring about Si
Conclusions:
Therefore, I need to bring about Si.
Critical questions:
Other-Means Q. Are there alternative possible actions to bring about Si that
could also lead to the goal?
Best-Means Q. Is Si the best (or most favourable) of the alternatives?
Other-Goals Q. Do I have goals other than Si whose achievement is preferable and that should have priority?
Possibility Q. Is it possible to bring about Si in the given circumstances?
Side Effects Q. Would bringing about Si have known bad consequences that
ought to be taken into account?
an example
Goal: Bringing about being rich is my goal (“I want to be rich”)
Means/Plan: In order to bring about being rich I need to bring about having a job (“To be rich I need a job”)
Action: Therefore I need to bring about having a job (“Therefore I have to search for a job.”)
an example
http://ova.arg-tech.org/
with
http://homepages.abdn.ac.uk/f.cerutti/pages/research/tutorialijcai2015/rich.html
..a semantic-web view of argumentation
[Rah+11]
[Figure: the AIF ontology. A Node Graph (argument network) has Information Nodes (I-Nodes) and Scheme Nodes (S-Nodes), connected by edges. S-Nodes specialize into rule of inference application nodes (RA-Nodes), conflict application nodes (CA-Nodes), preference application nodes (PA-Nodes), and derived concept application nodes (e.g. defeat). Each application node uses a corresponding scheme (rule of inference, logical or presumptive; conflict; preference), contained in a Context.]
AIF-OWL
[Bex+13]
On-line analysis of Moral Maze
http://www.arg-tech.org/AIFdb/argview/4879
http://toast.arg-tech.org/
..abstract argumentation frameworks
Value Based AF
Extended AF AFRA
Bipolar AF
value based argumentation framework
[BA09]
[Figure: VAF over a1 (value LC), a2 (values LC, FC), a3 (values LC, FH)]
a1 Hal should not take insulin, thus
allowing Carla to be alive (value of
Life for Carla LC);
a2 Hal should take insulin and
compensate Carla, thus both of
them stay alive (value of Life for
Carla, and the Freedom — of using
money — for Carla FC);
a3 Hal should take insulin and Carla should buy insulin, thus
both of them stay alive (value of
Life for Carla, and the Freedom —
of using money — for Hal FH).
extended argumentation framework
[Mod09]
extended argumentation framework
a1 “Today will be dry in London since
the BBC forecast sunshine”;
a2 “Today will be wet in London since
CNN forecast rain”;
a3 “But the BBC are more trustworthy
than CNN”;
a4 “However, statistically CNN are
more accurate forecasters than
the BBC”;
a5 “Basing a comparison on statistics
is more rigorous and rational than
basing a comparison on your
instincts about their relative
trustworthiness”.
afra: argumentation framework with recursive attacks
[Bar+11]
a1 There is a last minute offer for
Gstaad: therefore I should go to
Gstaad;
a2 There is a last minute offer for
Cuba: therefore I should go to
Cuba;
a3 I do like to ski;
a4 The weather report says that there has been no snowfall in Gstaad for a month: therefore it is not possible to ski in Gstaad;
a5 It is anyway possible to ski in
Gstaad, thanks to a good amount
of artificial snow.
bipolar argumentation framework
[CL05]
[Figure: BAF over a1, a2, a3, a4]
a1 in favour of m, with premises
{s, f, (s ∧ f) → m};
a2 in favour of ¬s, with premises
{w, w → ¬s};
a3 in favour of ¬w, with premises
{b, b → ¬w};
a4 in favour of f, with premises
{l, l → f}
m Mary (who is small) is the killer
f the killer is female
s the killer is small
w a witness says that the killer is tall
b the witness is short-sighted
l the killer has long hair and wears lipstick
..structured argumentation frameworks
DeLP ABA
ASPIC+
Deductive
Argumentation
Logic for Clinical
Knowledge
delp: defeasible logic programming
[SL92] [GS14]
A DeLP program is a pair ⟨Π, ∆⟩:
∙ Π: non-defeasible knowledge, i.e. facts (atomic information) and strict rules L0 ←− L1, . . . , Ln
∙ ∆: defeasible knowledge, i.e. defeasible rules L0 −< L1, . . . , Ln
Def. 40
Let H be a ground literal: ⟨A, H⟩ is an argument structure if:
1. there exists a defeasible derivation* for H from ⟨Π, A⟩;
2. there are no defeasible derivations from ⟨Π, A⟩ of contradictory literals;
3. there is no proper subset A′ ⊂ A such that A′ satisfies (1) and (2).
*A defeasible derivation for Q from ⟨Π, ∆⟩, is L1, L2, . . . , Ln = Q s.t.: (i) Li is a fact; or (ii) ∃Ri ∈ ⟨Π, ∆⟩ with
head Li and body B1, . . . , Bk, and every literal of the body is an element Lj of the sequence with j < i.
delp: defeasible logic programming
Def. 41
⟨B, S⟩ is a counter-argument for ⟨A, H⟩ at literal P, if there exists a sub-argument ⟨C, P⟩ of
⟨A, H⟩ such that P and S disagree, that is, there exist two contradictory literals that have a
strict derivation from Π ∪ {S, P}. The literal P is referred as the counter-argument point
and ⟨C, P⟩ as the disagreement sub-argument.
Def. 42
Let ⟨B, S⟩ be a counter-argument for ⟨A, H⟩ at point P, and ⟨C, P⟩ the disagreement
sub-argument.
If ⟨B, S⟩ ≻* ⟨C, P⟩, then ⟨B, S⟩ is a proper defeater for ⟨A, H⟩.
If ⟨B, S⟩ ⊁ ⟨C, P⟩ and ⟨C, P⟩ ⊁ ⟨B, S⟩, then ⟨B, S⟩ is a blocking defeater for ⟨A, H⟩.
⟨B, S⟩ is a defeater for ⟨A, H⟩ if it is either a proper or a blocking defeater for ⟨A, H⟩.
*≻ is an argument comparison criterion.
delp: defeasible logic programming
Π1 = { cloudy; dry_season; waves; vacation; ¬working ←− vacation }
∆1 = { surf −< nice, spare_time; nice −< waves; spare_time −< ¬busy; ¬busy −< ¬working; ¬nice −< rain; rain −< cloudy; ¬rain −< dry_season }
delp: defeasible logic programming
Π1 = { cloudy; dry_season; waves; vacation; ¬working ←− vacation }
A0 = { surf −< nice, spare_time; nice −< waves; spare_time −< ¬busy; ¬busy −< ¬working }
A1 = { ¬nice −< rain; rain −< cloudy }
A2 = { nice −< waves }
A3 = { rain −< cloudy }
A4 = { ¬rain −< dry_season }
assumption based argumentation framework
[Bon+97] [Ton14]
An ABA framework is a tuple ⟨L, R, A, ¯⟩, where L is a language, R a set of rules, A ⊆ L a set of assumptions, and ¯ : A → L a contrariness function.
Def. 45
An argument for the claim σ ∈ L supported by A ⊆ A (A ⊢ σ) is a deduction for σ
supported by A (and some R ⊆ R).*
Def. 46
An argument A1 ⊢ σ1 attacks an argument A2 ⊢ σ2 iff σ1 is the contrary of one of the
assumptions in A2.
*A (finite) tree with nodes labelled by sentences in L or by τ /∈ L, the root labelled by σ, leaves either τ
or sentences in A, non-leaves σ′ with, as children, the elements of the body of some rule in R with head σ′.
assumption based argumentation framework
R = { innocent(X) ←− notGuilty(X);
killer(oj) ←− DNAshows(oj), DNAshows(X) ⊃ killer(X);
DNAshows(X) ⊃ killer(X) ←− DNAfromReliableEvidence(X);
evidenceUnreliable(X) ←− collected(X, Y), racist(Y);
DNAshows(oj) ←−;
collected(oj, mary) ←−;
racist(mary) ←− }
A = { notGuilty(oj);
DNAfromReliableEvidence(oj) }
The contrary of notGuilty(oj) is killer(oj); the contrary of DNAfromReliableEvidence(oj) is evidenceUnreliable(oj).
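Def. 46’s attack relation is directly executable once arguments are encoded. A sketch assuming arguments are (assumptions, claim) pairs and the contrariness function is a dict; this encoding is mine, not from the slides:

```python
# Contrariness function of the example, as a dict (hypothetical encoding)
contrary = {
    "notGuilty(oj)": "killer(oj)",
    "DNAfromReliableEvidence(oj)": "evidenceUnreliable(oj)",
}

def attacks(a1, a2):
    """a1 attacks a2 iff a1's claim is the contrary of one of a2's
    assumptions (Def. 46)."""
    _, claim1 = a1
    asms2, _ = a2
    return any(contrary.get(asm) == claim1 for asm in asms2)

# Two arguments from the example: one for killer(oj), one for innocent(oj)
dna_arg = ({"DNAfromReliableEvidence(oj)"}, "killer(oj)")
innocence_arg = ({"notGuilty(oj)"}, "innocent(oj)")
```

Here dna_arg attacks innocence_arg (its claim is the contrary of notGuilty(oj)), but not vice versa.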
aspic+
[Pra10] [MP13] [MP14]
Def 47
An argumentation system is a tuple AS = ⟨L, ¯, R, ν⟩ where:
∙ ¯ : L → 2L is a contrariness function s.t. if φ ∈ ψ̄ and:
∙ ψ ∉ φ̄, then φ is a contrary of ψ;
∙ ψ ∈ φ̄, then φ is a contradictory of ψ (φ = −ψ);
∙ R = Rd ∪ Rs: strict (Rs) and defeasible (Rd) inference rules s.t. Rd ∩ Rs = ∅;
∙ ν : Rd → L is a partial function.*
P ⊆ L is consistent iff ∄φ, ψ ∈ P s.t. φ ∈ ψ̄; otherwise it is inconsistent.
A knowledge base in an AS is K = Kn ∪ Kp ⊆ L, where {Kn, Kp} is a partition of K; Kn contains axioms that cannot be attacked; Kp contains ordinary premises that can be attacked.
An argumentation theory is a pair AT = ⟨AS, K⟩.
*Informally, ν(r) is a wff in L which says that the defeasible rule r is applicable.
aspic+
Def. 48
An argument a on the basis of AT = ⟨AS, K⟩, with AS = ⟨L, ¯, R, ν⟩, is:
1. φ, if φ ∈ K, with Prem(a) = {φ}; Conc(a) = φ; Sub(a) = {φ}; Rules(a) = DefRules(a) = ∅; TopRule(a) = undefined.
2. a1, . . . , an −→/=⇒ ψ, if a1, . . . , an (n ≥ 0) are arguments such that there exists a strict/defeasible rule r = Conc(a1), . . . , Conc(an) −→/=⇒ ψ in Rs/Rd, with Prem(a) = Prem(a1) ∪ . . . ∪ Prem(an); Conc(a) = ψ; Sub(a) = Sub(a1) ∪ . . . ∪ Sub(an) ∪ {a}; Rules(a) = Rules(a1) ∪ . . . ∪ Rules(an) ∪ {r}; DefRules(a) = Rules(a) ∩ Rd; TopRule(a) = r.
a is strict if DefRules(a) = ∅, otherwise defeasible; firm if Prem(a) ⊆ Kn, otherwise plausible.
aspic+
Def. 49
Given arguments a and b, a defeats b iff a undercuts, successfully rebuts, or successfully undermines b, where:
∙ a undercuts b (on b′) iff Conc(a) is a contrary/contradictory of ν(r) for some b′ ∈ Sub(b) s.t. r = TopRule(b′) ∈ Rd;
∙ a successfully rebuts b (on b′) iff Conc(a) is a contrary/contradictory of φ for some b′ ∈ Sub(b) of the form b′′1, . . . , b′′n =⇒ φ, and a ⊀ b′;
∙ a successfully undermines b (on φ) iff Conc(a) is a contrary/contradictory of φ, φ ∈ Prem(b) ∩ Kp, and a ⊀ φ.
Def. 50
AF is the abstract argumentation framework defined by AT = ⟨AS, K⟩, where A is the smallest set of all finite arguments constructed from K and → is the defeat relation on A.
aspic+
Rationality postulates (for an extension S):
P1: direct consistency iff {Conc(a) | a ∈ S} is consistent;
P2: indirect consistency iff Cl({Conc(a) | a ∈ S}) is consistent;
P3: closure iff {Conc(a) | a ∈ S} = Cl({Conc(a) | a ∈ S});
P4: sub-argument closure iff ∀a ∈ S, Sub(a) ⊆ S.

Sufficient conditions:
∙ Rs is closed under transposition: if φ1, . . . , φn −→ ψ ∈ Rs, then ∀i = 1 . . . n, φ1, . . . , φi−1, ¬ψ, φi+1, . . . , φn −→ ¬φi ∈ Rs;
∙ Cl(Kn) is consistent;
∙ the argument ordering ⪯ is reasonable, namely:
∙ ∀a, b: if a is strict and firm and b is plausible or defeasible, then b ≺ a;
∙ ∀a, b: if b is strict and firm, then b ⊀ a;
∙ ∀a, a′, b such that a′ is a strict continuation of {a}: if a ⊀ b then a′ ⊀ b, and if b ⊀ a then b ⊀ a′;
∙ given a finite set of arguments {a1, . . . , an}, let a+i be a strict continuation of {a1, . . . , ai−1, ai+1, . . . , an}; then it is not the case that ∀i, a+i ≺ ai.
aspic+
Kp = { Snores; Professor }
Rd = { Snores =⇒d1 Misbehaves; Misbehaves =⇒d2 AccessDenied; Professor =⇒d3 AccessAllowed }
AccessAllowed = −AccessDenied
Snores <′ Professor; d1 < d2; d1 < d3; d3 < d2.
aspic+
http://toast.arg-tech.org/4214
deductive argumentation
[BH01] [GH11] [BH14]
Def. 53
A deductive argument is an ordered pair ⟨Φ, α⟩ where Φ ⊢i α. Φ is the support, or
premises, or assumptions of the argument, and α is the claim, or conclusion, of the
argument.
consistency constraint when Φ is consistent (not essential, cf. paraconsistent logic).
minimality constraint when there is no Ψ ⊂ Φ such that Ψ ⊢ α
Def. 56
If ⟨Φ, α⟩ and ⟨Ψ, β⟩ are arguments, then
∙ ⟨Φ, α⟩ rebuts ⟨Ψ, β⟩ iff α ⊢ ¬β
∙ ⟨Φ, α⟩ undercuts ⟨Ψ, β⟩ iff α ⊢ ¬⋀Ψ
deductive argumentation
Def. 55
A classical logic argument from a set of formulae ∆ is a pair ⟨Φ, α⟩ such that: Φ ⊆ ∆; Φ ⊬ ⊥; Φ ⊢ α; and there is no Φ′ ⊂ Φ such that Φ′ ⊢ α.
Def. 57
Let a and b be two classical arguments. We define the following types of classical attack:
∙ a is a direct undercut of b if ¬Claim(a) ∈ Support(b)
∙ a is a classical defeater of b if Claim(a) ⊢ ¬⋀Support(b)
∙ a is a classical direct defeater of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ⊢ ¬ϕ
∙ a is a classical undercut of b if ∃Ψ ⊆ Support(b) s.t. Claim(a) ≡ ¬⋀Ψ
∙ a is a classical direct undercut of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ≡ ¬ϕ
∙ a is a classical canonical undercut of b if Claim(a) ≡ ¬⋀Support(b)
∙ a is a classical rebuttal of b if Claim(a) ≡ ¬Claim(b)
∙ a is a classical defeating rebuttal of b if Claim(a) ⊢ ¬Claim(b)
deductive argumentation
a1 = ⟨{bp(high), ok(diuretic), bp(high) ∧ ok(diuretic) → give(diuretic), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(diuretic) ∧ ¬ok(betablocker)⟩
a2 = ⟨{bp(high), ok(betablocker), bp(high) ∧ ok(betablocker) → give(betablocker), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(betablocker) ∧ ¬ok(diuretic)⟩
a3 = ⟨{symptom(emphysema), symptom(emphysema) → ¬ok(betablocker)}, ¬ok(betablocker)⟩
a logic for clinical knowledge
[HW12] [Wil+15]
Def. 58
Given treatments τ1 and τ2, X ⊆ evidence, there are three kinds of inductive argument:
1. ⟨X, τ1 > τ2⟩: evidence in X supports the claim that treatment τ1 is superior to τ2.
2. ⟨X, τ1 ∼ τ2⟩: evidence in X supports the claim that treatment τ1 is equivalent to τ2.
3. ⟨X, τ1 < τ2⟩: evidence in X supports the claim that treatment τ1 is inferior to τ2.
Def. 59
If the claim of argument ai is ϵi and the claim of argument aj is ϵj then ai conflicts with aj
whenever:
1. ϵi = τ1 > τ2, and ( ϵj = τ1 ∼ τ2 or ϵj = τ1 < τ2 ).
2. ϵi = τ1 ∼ τ2, and ( ϵj = τ1 > τ2 or ϵj = τ1 < τ2 ).
3. ϵi = τ1 < τ2, and ( ϵj = τ1 > τ2 or ϵj = τ1 ∼ τ2 ).
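Def. 59 reduces to a simple check once claims are encoded. A sketch assuming claims are (τ1, relation, τ2) triples, a hypothetical encoding of the three claim forms above:

```python
def conflicts(claim_i, claim_j):
    """Def. 59: two claims about the same treatment pair conflict iff
    they assert different relations (>, ~ or <)."""
    t1, rel_i, t2 = claim_i
    u1, rel_j, u2 = claim_j
    return (t1, t2) == (u1, u2) and rel_i != rel_j
```

For example, a superiority claim CP > NT conflicts with both CP < NT and CP ~ NT, but not with another CP > NT claim.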
a logic for clinical knowledge
Def. 60
For any pair of arguments ai and aj, and a preference relation R, ai attacks aj with respect
to R iff ai conflicts with aj and it is not the case that aj is strictly preferred to ai according
to R.
A domain-specific benefit preference relation is defined in [HW12]
Def. 61 (Meta arguments)
For a ∈ Arg(evidence), if there is an e ∈ support(a) such that:
∙ e is not statistically significant, and e is not a side-effect, then this is an attacker:
⟨Not statistically significant⟩;
∙ e is a non-randomised and non-blind trial, then this is an attacker:
⟨Non-randomized & non-blind trials⟩;
∙ e is a meta-analysis that concerns a narrow patient group, then this is an attacker:
⟨Meta-analysis for a narrow patient group⟩.
a logic for clinical knowledge
ID Left Right Indicator Risk ratio Outcome p
e1 CP* NT† Pregnancy 0.05 superior 0.01
e2 CP NT Ovarian cancer 0.99 superior 0.07
e3 CP NT Breast cancer 1.04 inferior 0.01
e4 CP NT DVT 1.02 inferior 0.05
N.B.: Fictional data.
*Contraceptive pill.
†No Treatment.
a logic for clinical knowledge
[Figure: ⟨{e1}, CP > NT⟩ and ⟨{e2}, CP > NT⟩ aggregate into ⟨{e1, e2}, CP > NT⟩; ⟨{e3}, CP < NT⟩ and ⟨{e4}, CP < NT⟩ aggregate into ⟨{e3, e4}, CP < NT⟩; the meta-argument ⟨Not statistically significant⟩ attacks ⟨{e2}, CP > NT⟩]
..probabilistic argumentation frameworks
epistemic approach
[Thi12] [Hun13] [HT14] [BGV14]
An epistemic probability distribution* for an argumentation framework ∆ = ⟨A, →⟩ is P : A → [0, 1].
Def. 65
For an argumentation framework AF = ⟨A, →⟩ and a probability assignment P, the
epistemic extension is
{a ∈ A | P(a) > 0.5}
*A way to compute it for arguments based on classical deduction is presented in the tutorial.
epistemic approach
COH: P is coherent if for every a, b ∈ A, if a attacks b then P(a) ≤ 1 − P(b).
SFOU: P is semi-founded if P(a) ≥ 0.5 for every unattacked a ∈ A.
FOU: P is founded if P(a) = 1 for every unattacked a ∈ A.
SOPT: P is semi-optimistic if P(a) ≥ 1 − Σb∈a− P(b) for every a ∈ A with at least one attacker.
OPT: P is optimistic if P(a) ≥ 1 − Σb∈a− P(b) for every a ∈ A.
JUS: P is justifiable if P is coherent and optimistic.
TER: P is ternary if P(a) ∈ {0, 0.5, 1} for every a ∈ A.
RAT: P is rational if for every a, b ∈ A, if a attacks b then P(a) > 0.5 implies P(b) ≤ 0.5.
NEU: P is neutral if P(a) = 0.5 for every a ∈ A.
INV: P is involutary if for every a, b ∈ A, if a attacks b, then P(a) = 1 − P(b).
Let the event “a is accepted” be denoted as a, and let Eac(S) = {a | a ∈ S}. Then P is weakly p-justifiable iff ∀a ∈ A, ∀b ∈ a−, P(a) ≤ 1 − P(b).
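Several of these properties are one-line checks for a given P. A sketch for COH and FOU over a hypothetical two-argument AF, together with the epistemic extension of Def. 65:

```python
# Hypothetical AF: a attacks b, with a ternary assignment P
args = {"a", "b"}
att = {("a", "b")}
P = {"a": 1.0, "b": 0.0}

def coherent(P, att):
    """COH: if a attacks b then P(a) <= 1 - P(b)."""
    return all(P[a] <= 1 - P[b] for (a, b) in att)

def founded(P, att, args):
    """FOU: every unattacked argument gets probability 1."""
    unattacked = args - {b for (_, b) in att}
    return all(P[a] == 1 for a in unattacked)

def epistemic_extension(P):
    """Def. 65: arguments with probability strictly above 0.5."""
    return {a for a, p in P.items() if p > 0.5}
```

With this P the distribution is coherent and founded, and the epistemic extension is {a}; assigning 0.9 to both a and b would violate COH.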
epistemic approach
Def. 67
Restriction on complete*
probability function P Classical semantics
No restriction complete extensions
No arguments a such that P(a) = 0.5 stable
Maximal no. of a such that P(a) = 1 preferred
Maximal no. of a such that P(a) = 0 preferred
Maximal no. of a such that P(a) = 0.5 grounded
Minimal no. of a such that P(a) = 1 grounded
Minimal no. of a such that P(a) = 0 grounded
Minimal no. of a such that P(a) = 0.5 semi-stable
*Coherent, founded, and ternary. http://arxiv.org/abs/1405.3376
structural approach
[Hun14]
P : {∆′ | ∆′ ⊑ ∆} → [0, 1]
Subframework Probability
∆1 a ↔ b 0.09
∆2 a 0.81
∆3 b 0.01
∆4 0.09
PGR({a, b}) = 0.00
PGR({a}) = P(∆2) = 0.81
PGR({b}) = P(∆3) = 0.01
PGR({}) = P(∆1) + P(∆4) = 0.18
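The PGR values above follow by summing subframework probabilities per extension. A sketch assuming the grounded extension of each subframework is precomputed (∆1 is the mutual attack a ↔ b, so its grounded extension is empty):

```python
from collections import defaultdict

# Subframework probabilities from the example
P_sub = {"D1": 0.09, "D2": 0.81, "D3": 0.01, "D4": 0.09}

# Precomputed grounded extension of each subframework
grounded_ext = {"D1": frozenset(), "D2": frozenset({"a"}),
                "D3": frozenset({"b"}), "D4": frozenset()}

def p_grounded(P_sub, grounded_ext):
    """P_GR(E): total probability of the subframeworks whose
    grounded extension is exactly E."""
    out = defaultdict(float)
    for d, p in P_sub.items():
        out[grounded_ext[d]] += p
    return dict(out)
```

This reproduces the table: PGR({a}) = 0.81, PGR({b}) = 0.01, PGR({}) = 0.09 + 0.09 = 0.18, and PGR({a, b}) = 0 since no subframework has that grounded extension.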
a computational framework
[Li15]
[Figure: the computational framework of [Li15]. A Structured Argumentation Framework (SAF) is converted to an ASPIC+ argumentation system (logical language, inference rules, contrariness function, …), then to a Dung AF (DAF) and to an Extended Evidential Argumentation Framework (EEAF); associating probabilities yields the probabilistic counterparts PrAF and PrEAF, with semantics preserved by the conversions.]
..cispaces
[Ton+15]
CISpaces
What is the cause of the illness? (Analyst Joe)
[Figure: hypothesis map connecting “Illness among people” and “Livestock illness”, with a possible connection to a CONTAMINATED WATER SUPPLY]
∙ Is this information credible?
∙ Are there alternative explanations?
∙ Is there evidence for the contamination of the water supply?
Analysts must reason with different types of evidence
research foci
Attributes of the problem domain
∙ Intelligence analysis is critical for making well-informed decisions
∙ Large amount of conflicting incomplete information
∙ Reasoning with different types of evidence
Research Question
How can we develop agents that support reasoning with different types of evidence in a combined approach throughout the process of analysis?
intelligence analysis
Def. 73
The application of individual and collective cognitive methods to evaluate, integrate, and interpret information about situations and events, in order to provide warning of potential threats or to identify opportunities.
[Pirolli & Card model: external data sources are searched and filtered into a shoebox; searching for relevance and evidence builds an evidence file; schematize, build case, and tell story lead to presentation, with re-evaluation feeding back. The foraging loop (search for information, search for evidence, search for support) feeds the sense-making loop (hypotheses Hyp1, Hyp2); structure increases with effort.]
Pirolli & Card Model
Effective if: TIMELY, TARGETED, and TRUSTED
collaboration among analysts
Team of Analysts: More effective, Prevent Bias, Different Expertise and Resources
[Challenges at different stages of analysis: while gathering information (search for evidence, search for support; shoebox → evidence file) and identifying plausible hypotheses (Hyp1, Hyp2), analysts need to share data and analysis, integrate and annotate information, assess credibility, and mitigate cognitive biases.]
cispaces agent support
[CISpaces architecture: the analyst works through the Interface (ToolBox, WorkBox, InfoBox, ReqBox, ChatBox), which connects via a Communication layer to the Sensemaking, Crowd-sourcing, and Provenance Agents.]
CISpaces Interface: Working space and access to agent support
Sensemaking Agent: Support collaborative analysis of arguments
Crowd-sourcing Agent: Enable participation of large groups of contributors
Provenance Agent: Assess the credibility of information
[Architecture diagram again, with the Sensemaking Agent highlighted.]
sensemaking agent (smag) - analysis construction
∙ Annotates Pro links;
∙ Suggests CQs (Con links) to prevent cognitive biases.
Causal – Distribution of Activities
∙ Typically, if C occurs, then E will occur
∙ In this case, C occurs
⇒ Therefore, in this case E will occur

Association – Element Connections
∙ An activity occurs, and an entity may be involved
∙ To perform the activity some property H is required
∙ The entity fits the property H
⇒ Therefore, the entity is associated with the activity
smag analysis construction (cont.)
Scheme instance: “E is an expert in D”, “E asserts A in D” ⊢ “A is true”.
Example: a lab expert on water toxins and chemicals asserts “There is a bacteria contaminating the water supply”, a Pro reason for “Water supply in Kish is contaminated”.
The analyst annotates pro links and nodes, matching them to an argument scheme (Expert Opinion, Cause to Effect, Identification, …).
Expert Opinion
∙ E is an expert in domain D containing A,
∙ E asserts that A is true,
⇒ Therefore, A may plausibly be true.
Example: the lab expert on water toxins and chemicals asserts “There is a bacteria contaminating the water supply”, supporting “Water supply in Kish is contaminated”.
Critical questions:
CQ1: Is E an expert in D?
CQ2: Is E reliable?
A negative answer to CQ2 (“The expert is not reliable”) becomes a Con node.
The analyst selects CQs; CISpaces shows a negative answer to a CQ to prevent cognitive biases.
smag hypotheses identification
controversial standpoints as extensions
1. Transform the current workbox view into an argumentation framework.
Workbox: premises q0, q1, q3; a PREMISE-based rule q0, q1 ⇒ q2; a CAUSE TO EFFECT rule q1, q3 ⇒ q4; contradictory statements: q2–q4.
ASPIC+ argumentation framework:
Premises: q0, q1, q3
Rules: q0, q1 ⇒ q2; q1, q3 ⇒ q4
Negation: q2 − q4
Arguments:
A0 : q0, A1 : q1, A2 : q3
A4 : A0, A1 ⇒ q2
A5 : A1, A2 ⇒ q4
2. ASPIC+/Dung’s AF implementation identifies the sets of acceptable arguments:
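A minimal sketch of that step: the two rebutting arguments A4 and A5 (their conclusions q2 and q4 are contradictory) attack each other, and the complete extensions of the resulting tiny AF can be enumerated by brute force; `complete_extensions` is a name chosen for this sketch, not CISpaces code.

```python
from itertools import combinations

def complete_extensions(args, attacks):
    """Enumerate complete extensions by brute force (fine for tiny AFs):
    a conflict-free set that contains exactly the arguments it defends."""
    exts = []
    for r in range(len(args) + 1):
        for combo in combinations(sorted(args), r):
            s = set(combo)
            if any((x, y) in attacks for x in s for y in s):
                continue  # not conflict-free
            out = {y for (x, y) in attacks if x in s}  # attacked by s
            defended = {a for a in args
                        if all(x in out for (x, y) in attacks if y == a)}
            if defended == s:
                exts.append(frozenset(s))
    return exts

# A4 and A5 rebut each other because q2 and q4 are contradictory.
exts = complete_extensions({"A0", "A1", "A2", "A4", "A5"},
                           {("A4", "A5"), ("A5", "A4")})
```

This yields three complete extensions: {A0, A1, A2} (the grounded one) and its two preferred supersets, adding A4 or A5.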
smag hypotheses identification (cont.)
3. CISpaces shows what conclusions can be supported
∙ Labelled according to extensions computed
∙ Arguments shared through the Argument Interchange Format (AIF)
[Workbox map with nodes: “Unidentified illness affects the local livestock in Kish”; “Non-waterborne bacteria were engineered and released in the water supply”; “Illness among young and elderly people in Kish caused by bacteria”; “Toxic bacteria contaminate the local water system in Kish”; “NGO Lab reports examined the contamination”; “NON-waterborne bacteria contaminate the water supply”; “There are bacteria in the water supply”; “Waterborne bacteria contaminate the water supply”; “Waterborne bacteria have formed by a sewage leakage in the water supply pipes”. Accepted nodes are ticked according to the computed extensions.]
[Architecture diagram again, with the Crowd-sourcing Agent highlighted.]
crowdsourcing agent
1. Critical questions trigger the need for further information on a topic
2. The analyst calls the crowdsourcing agent (CWSAg)
3. CWSAg distributes the query to a large group of contributors
4. CWSAg aggregates the results and shows statistics to the analyst
cwsag results import
Q1 (FOR “Water Contaminated”), answer: 21.1 (Pro)
Q0 (AGAINST “Water Contaminated”), answer: Clear (Con)
The two imported answers are CONTRADICTORY.
[Architecture diagram again, with the Provenance Agent highlighted.]
[Provenance scenario: an Observer records an observation (image, info ij) of a “Gang heading South”; a Messenger relays the message and an Informer reports info ik, “Gang crossing North Border”. Surveillance of BORDER L1-L2 and Image Processing deliver both items to Analyst Joe, each with its provenance chain GP(ij), GP(ik).]
argument from provenance
- Given a provenance chain GP(ij) of ij, information ij:
- (Where?) was derived from an entity A
- (Who?) was associated with actor AG
- (What?) was generated by activity P1
- (How?) was informed by activity P2
- (Why?) was generated to satisfy goal X
- (When?) was generated at time T
- (Which?) was generated by using some entities A1,…, AN
- where A, AG, P1, … belong to GP(ij),
- and the stated elements of GP(ij) support inferring that information ij is true,
⇒ Therefore, information ij may plausibly be taken to be true.
CQA1: Is ij consistent with other information?
CQA2: Is ij supported by evidence?
CQA3: Does GP(ij) contain other elements that lead us not to believe ij?
CQA4: Are there provenance elements that should have been included for believing ij?
argument for provenance preference
- Given information ij and ik,
- and the known parts of their provenance chains GP(ij) and GP(ik),
- if there exists a criterion Ctr such that GP(ij) ≪Ctr GP(ik), then ij ≪ ik;
- a criterion Ctr′ leads to assert that GP(ij) ≪Ctr′ GP(ik),
⇒ Therefore, ik should be preferred to ij.
Possible criteria: trustworthiness, reliability, timeliness, shortest path.
CQB1: Does a different criterion Ctr1, such that GP(ij) ≫Ctr1 GP(ik), lead to the preference ij ≪ ik not being valid?
CQB2: Is there any exception to criterion Ctr such that even if a provenance chain GP(ik) is preferred to
GP(ij), information ik is not preferred to information ij?
CQB3: Is there any other reason for believing that the preference ij ≪ ik is not valid?
pvag provenance analysis & import
[IMPORT / ANALYSIS. Provenance Explanation, following the Primary Source Pattern: the imported INFO “Livestock illness” is linked via PROV relations (used, wasGeneratedBy, wasDerivedFrom, wasAssociatedWith) to a US Patrol Report (Extract activity, US Team Patrol, prov:time 2015-04-27T02:27:40Z), a Farm Daily Report of type PrimarySource (Prepare activity, Kish Farmer), Livestock Pictures (Annotate activity), and the resulting Livestock Information. Import of preferences?]
theories/technologies integrated
∙ Argument representation:
∙ Argument Schemes and Critical questions (domain specific)
∙ „Bipolar-like” graph for user consumption
∙ AIF (extension for provenance)
∙ ASPIC(+)
∙ Arguments based on preferences (partially under development)
∙ Theoretical framework for acceptability status:
∙ AF
∙ PrAF (case study for [Li15])
∙ AFRA for preference handling (under development)
∙ Computational machinery: jArgSemSAT
..algorithms and implementations
[Cha+15]
ad-hoc procedures
ArgTools
[NAD14]
ad-hoc procedures
csp-based approach
ConArg
[BS12]
csp-based approach
A Constraint Satisfaction Problem (CSP) P is a triple P = ⟨X, D, C⟩ such that:
∙ X = ⟨x1, . . . , xn⟩ is a tuple of variables;
∙ D = ⟨D1, . . . , Dn⟩ is a tuple of domains such that ∀i, xi ∈ Di;
∙ C = ⟨C1, . . . , Ct⟩ is a tuple of constraints, where ∀j, Cj = ⟨RSj, Sj⟩, with scope Sj ⊆ {x1, . . . , xn} and relation RSj over the domains of the variables in Sj.
A solution to the CSP P is A = ⟨a1, . . . , an⟩ where ∀i, ai ∈ Di and ∀j, RSj holds on the
projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
csp-based approach
Given an AF:
1. create a variable for each argument whose domain is always {0, 1} — ∀ai ∈ A, ∃xi ∈ X
such that Di = {0, 1};
2. describe the constraints associated with the different definitions over Dung’s argumentation
frameworks: e.g. with a1 attacking a2,
{a1, a2} ⊆ A is D-conflict-free iff ¬(x1 = 1 ∧ x2 = 1);
3. solve the CSP problem.
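Following those three steps, the sketch below enumerates 0/1 assignments and keeps those satisfying the conflict-freeness constraints plus a stability constraint (every excluded argument must be attacked by an included one). A real encoding would hand the constraints to a CSP solver instead of brute-forcing them, and `stable_assignments` is a name invented here.

```python
from itertools import product

def stable_assignments(args, attacks):
    """CSP-style sketch: one 0/1 variable per argument; keep assignments
    that are conflict-free and where every 0-argument has a 1-attacker."""
    arglist = sorted(args)
    sols = []
    for bits in product((0, 1), repeat=len(arglist)):
        x = dict(zip(arglist, bits))
        # conflict-freeness: no attack between two included arguments
        conflict_free = all(not (x[a] and x[b]) for (a, b) in attacks)
        # stability: every excluded argument is attacked by an included one
        covered = all(x[a] or any(x[b] for (b, c) in attacks if c == a)
                      for a in arglist)
        if conflict_free and covered:
            sols.append({a for a in arglist if x[a]})
    return sols
```

On an odd attack cycle this returns no solution, matching the absence of stable extensions.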
asp-based approach
ASPARTIX-D / ASPARTIX-V / DIAMOND
[EGW10] [Dvo+11]
asp-based approach
πST = { in(X) ← not out(X), arg(X);
out(X) ← not in(X), arg(X);
← in(X), in(Y), defeat(X, Y);
defeated(X) ← in(Y), defeat(Y, X);
← out(X), not defeated(X)}.
Tests for subset-maximality exploit the metasp optimisation frontend for the
ASP-package gringo/claspD.
sat-based approaches
Cegartix
[Dvo+12]
ArgSemSAT/jArgSemSAT/LabSATSolver
[Cer+14b]
sat-based approaches
[Dvo+12]
⋀_{a→b} (¬xa ∨ ¬xb)   ∧   ⋀_{b→c} ( ¬xc ∨ ⋁_{a→b} xa )
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
C1:
Lab(a1) = in    ⇔ ∀a2 ∈ a1⁻ : Lab(a2) = out
Lab(a1) = out   ⇔ ∃a2 ∈ a1⁻ : Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻ : Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻ : Lab(a3) = undec
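These labelling conditions translate into CNF over three Boolean variables Ii, Oi, Ui per argument. A sketch of such a clause generator, checked with a naive enumeration instead of a real SAT solver; the function names are invented and this is not the ArgSemSAT code itself.

```python
from itertools import product

def complete_labelling_cnf(args, attacks):
    """Emit DIMACS-style clauses (lists of signed ints) whose models are
    exactly the complete labellings. Variables per argument: I, O, U."""
    idx = {a: k for k, a in enumerate(sorted(args))}
    I = lambda a: 3 * idx[a] + 1   # Lab(a) = in
    O = lambda a: 3 * idx[a] + 2   # Lab(a) = out
    U = lambda a: 3 * idx[a] + 3   # Lab(a) = undec
    att = {a: [b for (b, c) in attacks if c == a] for a in args}
    cls = []
    for a in sorted(args):
        # exactly one label per argument
        cls += [[I(a), O(a), U(a)], [-I(a), -O(a)],
                [-I(a), -U(a)], [-O(a), -U(a)]]
        if not att[a]:
            cls += [[I(a)], [-O(a)], [-U(a)]]  # unattacked: in
            continue
        cls.append([I(a)] + [-O(b) for b in att[a]])   # all attackers out -> in
        for b in att[a]:
            cls.append([-I(a), O(b)])                  # in -> each attacker out
            cls.append([-I(b), O(a)])                  # attacker in -> out
            cls.append([-U(a), -I(b)])                 # undec -> no attacker in
            cls.append([U(a), -U(b)] + [I(c) for c in att[a]])
        cls.append([-O(a)] + [I(b) for b in att[a]])   # out -> some attacker in
        cls.append([-U(a)] + [U(b) for b in att[a]])   # undec -> some attacker undec
    return cls

def all_models(nvars, clauses):
    """Naive SAT enumeration (exponential; illustration only)."""
    models = []
    for bits in product((False, True), repeat=nvars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            models.append(bits)
    return models
```

For the mutual attack a ↔ b the formula has exactly three models, matching the three complete labellings.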
sat-based approaches
[Cer+14b]
⋀_{i∈{1,...,k}} ( (Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui) )
∧ ⋀_{i | ϕ(i)⁻ = ∅} (Ii ∧ ¬Oi ∧ ¬Ui)
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( Ii ∨ ⋁_{j | ϕ(j)→ϕ(i)} ¬Oj )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬Ii ∨ Oj )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬Ij ∨ Oi )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ¬Oi ∨ ⋁_{j | ϕ(j)→ϕ(i)} Ij )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{k | ϕ(k)→ϕ(i)} ( Ui ∨ ¬Uk ∨ ⋁_{j | ϕ(j)→ϕ(i)} Ij )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ⋀_{j | ϕ(j)→ϕ(i)} (¬Ui ∨ ¬Ij) ∧ ( ¬Ui ∨ ⋁_{j | ϕ(j)→ϕ(i)} Uj ) )
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
Ca1:
Lab(a1) = in  ⇔ ∀a2 ∈ a1⁻ : Lab(a2) = out
Lab(a1) = out ⇔ ∃a2 ∈ a1⁻ : Lab(a2) = in
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
Cb1:
Lab(a1) = out   ⇔ ∃a2 ∈ a1⁻ : Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻ : Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻ : Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
Cc1:
Lab(a1) = in    ⇔ ∀a2 ∈ a1⁻ : Lab(a2) = out
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻ : Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻ : Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
C2:
Lab(a1) = in    ⇒ ∀a2 ∈ a1⁻ : Lab(a2) = out
Lab(a1) = out   ⇒ ∃a2 ∈ a1⁻ : Lab(a2) = in
Lab(a1) = undec ⇒ ∀a2 ∈ a1⁻ : Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻ : Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in.
C3:
Lab(a1) = in    ⇐ ∀a2 ∈ a1⁻ : Lab(a2) = out
Lab(a1) = out   ⇐ ∃a2 ∈ a1⁻ : Lab(a2) = in
Lab(a1) = undec ⇐ ∀a2 ∈ a1⁻ : Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻ : Lab(a3) = undec
sat-based approaches
[Cer+14b]
[Plot: IPC normalised to 100 versus the number of arguments (50–200) for the encodings C1, Ca1, Cb1, Cc1, C2, C3.]
iccma 2015
The First International Competition on Computational Models of Argumentation
http://argumentationcompetition.org/
Results announced yesterday
a parallel algorithm
[BGG05] [Cer+14a] [Cer+15]
[Example AF over the arguments a, b, c, d, e, f, g, h.]
a parallel algorithm
a parallel algorithm
[The example AF partitioned along the SCC hierarchy into Level 1 (a, b, c, d, e, f) and Level 2 (g, h).]
a parallel algorithm
[Same AF, Levels 1 and 2.]
After Level 1, the candidate labellings are:
{ ⟨{a, c, e}, {b, d, f}, {}⟩, ⟨{a, c, f}, {b, d, e}, {}⟩, ⟨{a, d, e}, {b, c, f}, {}⟩, ⟨{a, d, f}, {b, c, e}, {}⟩ }
a parallel algorithm
[Same AF, Levels 1 and 2.]
Moving to the last level:
B1: no argument in S3 is attacked from “outside” for Lab ∈ { ⟨{a, c, e}, {b, d, f}, {}⟩, ⟨{a, c, f}, {b, d, e}, {}⟩ }
B2: g is attacked by d for Lab ∈ { ⟨{a, d, e}, {b, c, f}, {}⟩, ⟨{a, d, f}, {b, c, e}, {}⟩ }
Cases B1 and B2 are computed in parallel.
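The SCC-based level decomposition driving this algorithm can be sketched with Tarjan's algorithm plus a topological layering of the condensation; this is a recursive, stdlib-only sketch for small AFs, and the function names are chosen here, not taken from the cited implementations.

```python
def tarjan_sccs(nodes, edges):
    """Tarjan's strongly connected components (recursive sketch)."""
    adj = {n: [] for n in nodes}
    for (u, v) in edges:
        adj[u].append(v)
    index, low, on_stack, stack = {}, {}, set(), []
    out, counter = [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:       # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            out.append(frozenset(comp))

    for n in sorted(nodes):
        if n not in index:
            visit(n)
    return out

def scc_levels(nodes, edges):
    """Layer the SCCs: level 0 is attacked only from within itself,
    level k only from levels < k."""
    comps = tarjan_sccs(nodes, edges)
    of = {n: c for c in comps for n in c}
    level, remaining, k = {}, set(comps), 0
    while remaining:
        ready = {c for c in remaining
                 if all(of[u] == c or of[u] not in remaining
                        for (u, v) in edges if v in c)}
        for c in ready:
            level[c] = k
        remaining -= ready
        k += 1
    return level
```

On a ↔ b feeding into c ↔ d, the SCC {a, b} lands at level 0 and {c, d} at level 1.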
a parallel algorithm
[Same AF, Levels 1 and 2.]
The resulting labellings are:
{ ⟨{a, c, e, g}, {b, d, f, h}, {}⟩, ⟨{a, c, e, h}, {b, d, f, g}, {}⟩, ⟨{a, c, f, g}, {b, d, e, h}, {}⟩, ⟨{a, c, f, h}, {b, d, e, g}, {}⟩, ⟨{a, d, e, h}, {b, c, f, g}, {}⟩, ⟨{a, d, f, h}, {b, c, e, g}, {}⟩ }
“We need to be smart” (Holger H. Hoos, invited keynote talk at ECAI 2014)
[VCG14] [CGV14]
features from an argumentation graph
Directed Graph (26 features)
∙ Structure: # vertices (|A|), # edges (|→|), # vertices / # edges, # edges / # vertices, density
∙ Degree (attackers): average, stdev, max, min
∙ SCCs: #, average, stdev, max, min
∙ Structure: # self-defeating, # unattacked, flow hierarchy, Eulerian, aperiodic
∙ CPU-time: …

Undirected Graph (24 features)
∙ Structure: # edges, # vertices / # edges, # edges / # vertices, density
∙ Degree: average, stdev, max, min
∙ SCCs: #, average, stdev, max, min
∙ Structure: transitivity
∙ 3-cycles: #, average, stdev, max, min
∙ CPU-time: …
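A few of the cheap directed-graph features can be computed directly. The sketch below covers size, density, and attacker-degree statistics only; the dictionary keys are names invented for this sketch, not the feature names used in [VCG14].

```python
from statistics import mean, pstdev

def graph_features(nodes, edges):
    """Extract a handful of cheap structural features of a directed AF."""
    n, m = len(nodes), len(edges)
    indeg = {v: 0 for v in nodes}       # number of attackers per argument
    for (u, v) in edges:
        indeg[v] += 1
    degs = list(indeg.values())
    return {
        "num_vertices": n,
        "num_edges": m,
        "density": m / (n * (n - 1)) if n > 1 else 0.0,
        "attackers_avg": mean(degs),
        "attackers_stdev": pstdev(degs),
        "attackers_max": max(degs),
        "attackers_min": min(degs),
        "num_unattacked": sum(1 for d in degs if d == 0),
    }
```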
how hard is it to get the features?
Directed Graph features (DG):
Class        Mean   stdDev  # feat
Graph Size   0.001  0.009   5
Degree       0.003  0.009   4
SCC          0.046  0.036   5
Structure    2.304  2.868   5

Undirected Graph features (UG):
Class        Mean   stdDev  # feat
Graph Size   0.001  0.003   4
Degree       0.002  0.004   4
Components   0.011  0.009   5
Structure    0.799  0.684   1
Triangles    0.787  0.671   5

Average CPU-time (and standard deviation) needed for extracting the features of a given class.
protocol: some numbers
∙ |SCCS∆| between 1 and 100;
∙ |A| between 10 and 5,000;
∙ |→| between 25 and 270,000 (Erdös-Rényi, p uniformly distributed);
∙ Overall, 10,000 AFs.
∙ Cutoff time of 900 seconds (also assigned to runs that crashed, timed out, or ran out of memory).
∙ EPMs both for Regression (Random forests) and Classification (M5-Rules) using WEKA;
∙ Evaluation using a 10-fold cross-validation approach on a uniform random
permutation of instances.
result 1: best features for prediction
Solver B1 B2 B3
AspartixM number of arguments density of directed graph size of max. SCC
PrefSAT density of directed graph number of SCCs aperiodicity
NAD-Alg density of directed graph CPU-time for density CPU-time for Eulerian
SSCp density of directed graph number of SCCs size of the max SCC
Determined by a greedy forward search based on the Correlation-based Feature
Selection (CFS) attribute evaluator.
Legend: AF structure / SCCs / CPU-time for feature extraction
result 2: predicting (log)runtime
RMSE of Regression (Lower is better)
B1 B2 B3 DG UG SCC All
AspartixM 0.66 0.49 0.49 0.48 0.49 0.52 0.48
PrefSAT 1.39 0.93 0.93 0.89 0.92 0.94 0.89
NAD-Alg 1.48 1.47 1.47 0.77 0.57 1.61 0.55
SSCp 1.36 0.80 0.78 0.75 0.75 0.79 0.74
RMSE = √( Σ_{i=1}^{n} ( log10(ti) − log10(yi) )² / n )
Legend: AF structure / SCCs / CPU-time for feature extraction / Undirected Graph
result 3: best features for classification
C-B1 C-B2 C-B3
number of arguments density of directed graph min attackers
Determined by a greedy forward search based on the Correlation-based Feature
Selection (CFS) attribute evaluator.
Legend: AF structure / Attackers
result 4: classification, i.e. selecting the best solver for a given af
Classification (Higher is better)
|A| density min attackers DG UG SCC All
Accuracy 48.5% 70.1% 69.9% 78.9% 79.0% 55.3% 79.5%
Prec. AspartixM 35.0% 64.6% 63.7% 74.5% 74.9% 42.2% 76.1%
Prec. PrefSAT 53.7% 67.8% 68.1% 79.6% 80.5% 60.4% 80.1%
Prec. NAD-Alg 26.5% 69.2% 69.0% 81.7% 85.1% 35.3% 86.0%
Prec. SSCp 54.3% 73.0% 72.7% 76.6% 76.8% 57.8% 77.2%
Legend: AF structure / Attackers / Undirected Graph / SCCs
result 5: algorithm selection
Metric: Fastest
(max. 1007)
AspartixM 106
NAD-Alg 170
PrefSAT 278
SSCp 453
EPMs Regression 755
EPMs Classification 788
Metric: IPC*
(max. 1007)
NAD-Alg 210.1
AspartixM 288.3
PrefSAT 546.7
SSCp 662.4
EPMs Regression 887.7
EPMs Classification 928.1
*Scale of (log)relative performance
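The IPC metric can be sketched as the planning-competition time score; the exact definition used for ICCMA-style comparisons is an assumption here: per instance, a solver scores 1/(1 + log10(T/T*)), where T* is the best time on that instance, and 0 if it exceeds the cutoff.

```python
import math

def ipc_score(times, cutoff=900.0):
    """IPC-style time score. `times` maps solver name to a list of
    runtimes (one per instance); runs at or above `cutoff` count as
    unsolved and score 0."""
    scores = {s: 0.0 for s in times}
    n_instances = len(next(iter(times.values())))
    for i in range(n_instances):
        runs = {s: ts[i] for s, ts in times.items() if ts[i] < cutoff}
        if not runs:
            continue  # nobody solved this instance
        best = min(runs.values())
        for s, t in runs.items():
            scores[s] += 1.0 / (1.0 + math.log10(t / best))
    return scores
```

Per-instance algorithm selection via EPMs then amounts to picking, for each AF, the solver with the smallest predicted runtime, and scoring the resulting portfolio with the same metric.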
..the frontier
belief revision and argumentation
[FKS09] [FGS13]
belief revision and argumentation
Potential cross-fertilisation
Argumentation in Belief Revision
∙ Justification-based truth maintenance
system
∙ Assumption-based truth maintenance
system
Some conceptual differences: in revision, external beliefs are compared with internal beliefs and, after a selection process, some sentences are discarded while others are accepted. [FKS09]
Belief Revision in Argumentation
∙ Changing by adding or deleting an
argument.
∙ Changing by adding or deleting a set of
arguments.
∙ Changing the attack (and/or defeat)
relation among arguments.
∙ Changing the status of beliefs (as
conclusions of arguments).
∙ Changing the type of an argument (from
strict to defeasible, or vice versa).
abstract dialectical framework
[Bre+13]
abstract dialectical framework
Dependency Graph + Acceptance Conditions
argumentation and social networks
[LM11] [ET13]
argumentation and social networks
a: The Wonder-Phone is the best new generation phone. (+20 -20)
b: No, the Magic-Phone is the best new generation phone. (+20 -20)
c: Here is a [link] to a review of the Magic-Phone giving poor scores due to bad battery performance. (+60 -10)
d: The author of c is ignorant, since subsequent reviews noted that only one of the first editions had such problems: [links]. (+10 -40)
e: d is wrong. I found out c knows about that but withheld the information. Here's a [link] to another thread proving it! (+40 -10)
argumentation and social networks
http://www.quaestio-it.com/
argument mining
[CV12] [Bud+14]
argument mining
http://www-sop.inria.fr/NoDE/
http://corpora.aifdb.org/
natural language interfaces
[CTO14] [Cam+14]
natural language interfaces
a1 : σA ⇒ γ
a2 : σB ⇒ ¬γ
a3 : ⇒ a1 a2
First Scenario
a1: Alice suggests to move in together with Jane
a2: Stacy suggests otherwise because Jane might have a hidden agenda
a3: Stacy is your best friend
             a1     a2     don’t know
% agreement  12.5   68.8   18.8
Second Scenario
a1: TV1 suggests that tomorrow will rain
a2: TV2 suggests that tomorrow will be cloudy but will not rain
a3: TV2 is generally more accurate than TV1
             a1     a2     don’t know
% agreement  5.0    50.0   45.0
natural language interfaces
Scrutable Autonomous Systems (in particular from 7’ 30”)
..conclusion
Hal’s Argument
..credits
credits
Template
adapted from mtheme https://github.com/matze/mtheme
Argumentation in Artificial Intelligence

  • 1.
    argumentation in artificialintelligence 20 Years After Dung’s Work . Federico Cerutti† xxvi • vii • mmxv † University of Aberdeen
  • 2.
    P. Baroni U. Brescia T.J. M. Bench-Capon U. Liverpool C. Cayrol IRIT P. E. Dunne U. Liverpool M. Giacomin U. Brescia A. Hunter UCL H. Li U. Aberdeen S. Modgil KCL T. J. Norman U. Aberdeen N. Oren U. Aberdeen C. Reed U. Dundee G. R. Simari U. Nacional der Sur A. Toniolo U. Aberdeen M. Vallati U. Huddersfield S. Woltran TU Wien J. Leite New U. Lisbon S. Parson KCL M. Thimm U. Koblenz
  • 3.
    This tutorial wassponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence, under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. The tutor acknowledges the contribution of the Santander Universities Network in supporting his travel
  • 4.
    outline ∙ Introduction Whybother? ∙ Dung’s AF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 5.
    outline ∙ Introduction ∙ Dung’sAF Syntax, semantics, current state of research ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 6.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes Arguments in human experience ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 7.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation AIF, OVA+, and other tools ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 8.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks Abstract, instantiated, probabilistic frameworks: kite-level view ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 9.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces „One Ring to bring them all and in the darkness bind them” ∙ Algorithms and Implementations ∙ The frontier
  • 10.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations …and how to choose among them ∙ The frontier
  • 11.
    outline ∙ Introduction ∙ Dung’sAF ∙ Argumentation Schemes ∙ A Semantic-Web view of Argumentation ∙ Frameworks ∙ CISpaces ∙ Algorithms and Implementations ∙ The frontier
  • 12.
    what is missing Alot Dialogues Argumentation and trust Argumentation in multi-agent systems Several approaches to represent arguments Several extensions to Dung’s framework Several frontier approaches …
  • 13.
  • 14.
    There is nomilk in the shop and the milk you have is sour. Beer Milk 1 0
  • 15.
    There is acoffee machine and fresh coffee in the cupboard. Beer makes you sick Beer Milk Coffee? 0 0 1
  • 16.
    There is freshmilk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not alcohol Beer Milk 0 1
  • 17.
    There is nomilk in the shop and the milk you have is sour. There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not alcohol Beer Milk Coffee? 1 0 0 0 1 0 1
  • 18.
    You should drinkmilk You should drink beer There is no milk in the shop and the milk you have is sour. There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick You should drink coffee There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not
  • 19.
  • 20.
  • 21.
    Definition 1 A Dungargumentation framework AF is a pair ⟨A, → ⟩ where A is a set of arguments, and → is a binary relation on A i.e. →⊆ A × A.
  • 22.
    A semantics isa way to identify sets of arguments (i.e. extensions) “surviving the conflict together”
  • 23.
  • 24.
    (some) semantics properties ∙Conflict-freeness (Def. 2) an attacking and an attacked argument can not stay together (∅ is c.f. by def.) ∙ Admissibility (Def. 5) ∙ Strong-Admissibility (Def. 7) ∙ Reinstatement (Def. 8) ∙ I-Maximality (Def. 9) ∙ Directionality (Def. 12)
  • 25.
    (some) semantics properties ∙Conflict-freeness (Def. 2) ∙ Admissibility (Def. 5) the extension should be able to defend itself, „fight fire with fire” (∅ is adm. by def.) ∙ Strong-Admissibility (Def. 7) ∙ Reinstatement (Def. 8) ∙ I-Maximality (Def. 9) ∙ Directionality (Def. 12)
  • 26.
    (some) semantics properties ∙Conflict-freeness (Def. 2) ∙ Admissibility (Def. 5) ∙ Strong-Admissibility (Def. 7) no self-defeating arguments (∅ is strong adm. by def.) ∙ Reinstatement (Def. 8) ∙ I-Maximality (Def. 9) ∙ Directionality (Def. 12)
  • 27.
    (some) semantics properties ∙Conflict-freeness (Def. 2) ∙ Admissibility (Def. 5) ∙ Strong-Admissibility (Def. 7) ∙ Reinstatement (Def. 8) if you defend some argument you should take it on board (∅ satisfies the principle only if there are no unattacked arguments) ∙ I-Maximality (Def. 9) ∙ Directionality (Def. 12)
  • 28.
    (some) semantics properties ∙Conflict-freeness (Def. 2) ∙ Admissibility (Def. 5) ∙ Strong-Admissibility (Def. 7) ∙ Reinstatement (Def. 8) ∙ I-Maximality (Def. 9) no extension is a proper subset of another one ∙ Directionality (Def. 12)
  • 29.
    (some) semantics properties ∙Conflict-freeness (Def. 2) ∙ Admissibility (Def. 5) ∙ Strong-Admissibility (Def. 7) ∙ Reinstatement (Def. 8) ∙ I-Maximality (Def. 9) ∙ Directionality (Def. 12) a (set of) argument(s) is affected only by its ancestors in the attack relation
  • 30.
    You should drinkmilk You should drink beer There is no milk in the shop and the milk you have is sour. There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick You should drink coffee There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not
  • 31.
    You should drinkmilk You should drink beer There is no milk in the shop and the milk you have is sour. There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick You should drink coffee There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not b a c d f e gh
  • 32.
    complete extension (def.15) Admissibility and reinstatement Set of conflict-free arguments s.t. each defended argument is included b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}, {a, c, e, g}   
  • 33.
    grounded extension (def.16) Strong Admissibility Minimum complete extension b a c d f e gh    {a, c, e, g}   
  • 34.
    preferred extension (def.17) Admissibility and maximality Maximum complete extensions b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}   
  • 35.
    stable extension (def.17) „orror vacui:” the absence of odd-length cycles is a sufficient condition for existence of stable extensions Complete extensions attacking all the arguments outside b a c d f e gh    {a, c, d, e, g}, {a, b, c, e, g}   
  • 36.
    complete labellings (def.20) Max. UNDEC ≡ Grounded b a c d f e gh    {a, c, e, g}   
  • 37.
    complete labellings (def.20) Max. IN ≡ Preferred b a c d f e gh    {a, c, d, e, g}   
  • 38.
    complete labellings (def.20) Max. IN ≡ Preferred b a c d f e gh    {a, b, c, e, g}   
  • 39.
    complete labellings (def.20) No UNDEC ≡ Stable b a c d f e gh    {a, c, d, e, g}   
  • 40.
    complete labellings (def.20) No UNDEC ≡ Stable b a c d f e gh    {a, b, c, e, g}   
  • 41.
    properties of semantics COGR PR ST D-conflict-free Yes Yes Yes Yes D-admissibility Yes Yes Yes Yes D-strongly admissibility No Yes No No D-reinstatement Yes Yes Yes Yes D-I-maximality No Yes Yes Yes D-directionality Yes Yes Yes No
  • 42.
  • 43.
    complexity σ = COσ = GR σ = PR σ = ST existsσ trivial trivial trivial np-c caσ np-c polynomial np-c np-c saσ polynomial polynomial Πp 2 -c conp-c verσ polynomial polynomial conp-c polynomial neσ np-c polynomial np-c np-c
  • 44.
    an exercise a b cd e f g h i l m no p
  • 45.
    an exercise a b cd e f g h i l m no p ECO(∆) =    {a, c}, {a, c, f}, {a, c, m}, {a, c, f, m}, {a, c, f, l}, {a, c, g, m}   
  • 46.
    an exercise a b cd e f g h i l m no p EGR(∆) =    {a, c}   
  • 47.
    an exercise a b cd e f g h i l m no p EPR(∆) =    {a, c, f, m}, {a, c, f, l}, {a, c, g, m}   
  • 48.
    an exercise a b cd e f g h i l m no p EST (∆) =      
  • 49.
  • 50.
    skepticisms and comparisonsof sets of extensions [BG09b]
  • 51.
    skepticisms and comparisonsof sets of extensions ..GR. CO . PR . ST ⪯S ⊕ relation Comparing extensions individually: E1 ⪯E ∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1: E1 ⊆ E2 and E1 ⪯E ∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2: E1 ⊆ E2
  • 52.
  • 53.
    signatures The signature ofa semantics is the collection of all possible sets of extensions an AF can possess under a semantics (Def. 25). S ⊆ 2A : ∙ ArgsS = ∪ S∈S S; ∙ PairsS = {⟨a, b⟩ | ∃S ∈ S s.t. {a, b} ⊆ S}. • • • • • S = { { a, d, e }, { b, c, e }, { a, b } } ArgsS = {a, b, c, d, e} PairsS = {⟨a, b⟩, ⟨a, d⟩, ⟨a, e⟩, ⟨b, c⟩, ⟨b, e⟩, ⟨c, e⟩, ⟨d, e⟩}
  • 54.
    signatures ∙ Incomparable (Def.26): A ⊆ B iff A = B „Maximal” ∙ Tight (Def. 27): if S ∪ {a} ̸∈ S then ∃b ∈ S s.t. ⟨a, b⟩ ̸∈ PairsS ∙ Adm-Closed (Def. 28): if ⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S Stable iff incomparable and tight Preferred iff non-empty, incomparable and adm-closed
  • 55.
    signatures ∙ Incomparable (Def.26): A ⊆ B iff A = B ∙ Tight (Def. 27): if S ∪ {a} ̸∈ S then ∃b ∈ S s.t. ⟨a, b⟩ ̸∈ PairsS if an argument does not occur in some extension there must be a reason for that (typically a conflict) ∙ Adm-Closed (Def. 28): if ⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S Stable iff incomparable and tight Preferred iff non-empty, incomparable and adm-closed
  • 56.
    signatures ∙ Incomparable (Def.26): A ⊆ B iff A = B ∙ Tight (Def. 27): if S ∪ {a} ̸∈ S then ∃b ∈ S s.t. ⟨a, b⟩ ̸∈ PairsS ∙ Adm-Closed (Def. 28): if ⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S „Admissibility” Stable iff incomparable and tight Preferred iff non-empty, incomparable and adm-closed
  • 57.
    signatures S = {{ a, d, e }, { b, c, e }, { a, b } } incomparable and adm-closed (⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S)
  • 58.
    signatures S = {{ a, d, e }, { b, c, e }, { a, b } } incomparable and adm-closed (⟨a, b⟩ ∈ PairsS ∀a, b ∈ A ∪ B, A ∪ B ∈ S) a b c d f e
  • 59.
    exercise S = {{ a, d, e }, { b, c, e }, { a, b, d } } Does an AF ∆ having EPR(∆) = S exist?
  • 60.
    exercise S = {{ a, d, e }, { b, c, e }, { a, b, d } } Does an AF ∆ having EPR(∆) = S exist? No PairsS = {⟨a, b⟩, ⟨a, d⟩, ⟨a, e⟩, ⟨b, c⟩, ⟨b, e⟩, ⟨c, e⟩, ⟨d, e⟩, ⟨b, d⟩} b, d ∈ { a, d, e } ∪ { a, b, d } but { a, d, e } ∪ { a, b, d } = { a, b, d, e } /∈ S
  • 61.
  • 62.
    decomposability AF1 AF2 AF3 Is it possibleto consider a (partial) argumentation framework as a black-box and focus only on the input/output interface?
  • 63.
    decomposability A semantics is: ∙Fully decomposable (Def. 35): ∙ any combination of “local” labellings gives rise to a global labelling; ∙ any global labelling arises from a set of “local” labellings ∙ Top-Down decomposable (Def. 36): combining “local” labellings you get all global labellings, possibly more ∙ Bottom-Up decomposable (Def. 37): combining “local” labellings you get only global labellings, possibly less
  • 64.
    decomposability A semantics is: ∙Fully decomposable (Def. 35): ∙ any combination of “local” labellings gives rise to a global labelling; ∙ any global labelling arises from a set of “local” labellings ∙ Top-Down decomposable (Def. 36): combining “local” labellings you get all global labellings, possibly more ∙ Bottom-Up decomposable (Def. 37): combining “local” labellings you get only global labellings, possibly less
  • 65.
    decomposability A semantics is: ∙Fully decomposable (Def. 35): ∙ any combination of “local” labellings gives rise to a global labelling; ∙ any global labelling arises from a set of “local” labellings ∙ Top-Down decomposable (Def. 36): combining “local” labellings you get all global labellings, possibly more ∙ Bottom-Up decomposable (Def. 37): combining “local” labellings you get only global labellings, possibly less
  • 66.
    decomposability A semantics is: ∙Fully decomposable (Def. 35): ∙ any combination of “local” labellings gives rise to a global labelling; ∙ any global labelling arises from a set of “local” labellings ∙ Top-Down decomposable (Def. 36): combining “local” labellings you get all global labellings, possibly more ∙ Bottom-Up decomposable (Def. 37): combining “local” labellings you get only global labellings, possibly less CO ST GR PR Full decomposability Yes Yes No No Top-down decomposability Yes Yes Yes Yes Bottom-up decomposability Yes Yes No No
  • 67.
  • 68.
what is an argument? The Argument Clinic
  • 69.
what is an argument? Argumentation is a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of propositions justifying or refuting the proposition expressed in the standpoint. Some elements of dialogue are in the handout, but they will not be considered here.
  • 70.
  • 71.
practical inference: an example of argumentation scheme Premises: Goal Premise: Bringing about Sn is my goal. Means Premise: In order to bring about Sn, I need to bring about Si. Conclusion: Therefore, I need to bring about Si.
  • 72.
practical inference: an example of argumentation scheme Premises: Goal Premise: Bringing about Sn is my goal. Means Premise: In order to bring about Sn, I need to bring about Si. Conclusion: Therefore, I need to bring about Si. Critical questions: Other-Means Q. Are there alternative possible actions to bring about Si that could also lead to the goal? Best-Means Q. Is Si the best (or most favourable) of the alternatives? Other-Goals Q. Do I have goals other than Sn whose achievement is preferable and that should have priority? Possibility Q. Is it possible to bring about Si in the given circumstances? Side Effects Q. Would bringing about Si have known bad consequences that ought to be taken into account?
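Schemes of this kind are easy to operationalise. A minimal Python sketch (all names here, e.g. `ArgumentationScheme` and `instantiate`, are illustrative, not part of any tool discussed in this tutorial) represents a scheme as premise templates plus its critical questions:

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentationScheme:
    """A stereotypical reasoning pattern: premise templates, a conclusion
    template, and the critical questions that can challenge an instance."""
    name: str
    premises: dict                       # premise label -> statement template
    conclusion: str
    critical_questions: list = field(default_factory=list)

    def instantiate(self, **bindings):
        """Fill the {placeholders} in premises and conclusion."""
        return {
            "premises": {k: v.format(**bindings) for k, v in self.premises.items()},
            "conclusion": self.conclusion.format(**bindings),
        }

practical_inference = ArgumentationScheme(
    name="Practical Inference",
    premises={
        "Goal": "Bringing about {Sn} is my goal",
        "Means": "In order to bring about {Sn}, I need to bring about {Si}",
    },
    conclusion="Therefore, I need to bring about {Si}",
    critical_questions=[
        "Other-Means: are there alternative actions leading to the goal?",
        "Best-Means: is {Si} the best of the alternatives?",
        "Other-Goals: do other goals have priority?",
        "Possibility: is it possible to bring about {Si}?",
        "Side Effects: would {Si} have known bad consequences?",
    ],
)

arg = practical_inference.instantiate(Sn="being rich", Si="having a job")
print(arg["conclusion"])  # Therefore, I need to bring about having a job
```

Instantiating the scheme with concrete bindings yields exactly the "being rich / having a job" example that follows.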
  • 73.
an example Goal: Bringing about being rich is my goal (I want to be rich). Means/Plan: In order to bring about being rich, I need to bring about having a job (To be rich I need a job). Action: Therefore, I need to bring about having a job (Therefore I have to search for a job).
  • 74.
  • 75.
a semantic-web view of argumentation
  • 76.
  • 77.
[AIF ontology:] A Graph (argument network) has Nodes and Edges. A Node is either an Information Node (I-Node) or a Scheme Node (S-Node). An S-Node is a Rule of inference application node (RA-Node), a Conflict application node (CA-Node), a Preference application node (PA-Node), a Derived concept application node (e.g. defeat), ... Each S-Node uses a Scheme, contained in a Context: Rule of inference schemes (logical inference schemes, presumptive inference schemes, ...), Conflict schemes (logical conflict schemes, ...), and Preference schemes (logical preference schemes, presumptive preference schemes, ...).
  • 78.
  • 79.
  • 80.
  • 81.
  • 82.
  • 83.
Value Based AF Extended AF AFRA Bipolar AF
  • 84.
value based argumentation framework [BA09]
  • 85.
value based argumentation framework [Graph over a1 (LC), a2 (LC, FC), a3 (LC, FH)] a1 Hal should not take insulin, thus allowing Carla to stay alive (value: Life for Carla, LC); a2 Hal should take insulin and compensate Carla, so that both of them stay alive (values: Life for Carla LC, and Freedom, of using money, for Carla FC); a3 Hal should take insulin and Carla should buy insulin, so that both of them stay alive (values: Life for Carla LC, and Freedom, of using money, for Hal FH).
  • 86.
Value Based AF Extended AF AFRA Bipolar AF
  • 87.
  • 88.
extended argumentation framework a1 “Today will be dry in London since the BBC forecast sunshine”; a2 “Today will be wet in London since CNN forecast rain”; a3 “But the BBC are more trustworthy than CNN”; a4 “However, statistically CNN are more accurate forecasters than the BBC”; a5 “Basing a comparison on statistics is more rigorous and rational than basing a comparison on your instincts about their relative trustworthiness”.
  • 89.
Value Based AF Extended AF AFRA Bipolar AF
  • 90.
afra: argumentation framework with recursive attacks [Bar+11]
  • 91.
afra: argumentation framework with recursive attacks a1 There is a last minute offer for Gstaad: therefore I should go to Gstaad; a2 There is a last minute offer for Cuba: therefore I should go to Cuba; a3 I do like to ski; a4 The weather report says that there have been no snowfalls in Gstaad for a month: therefore it is not possible to ski in Gstaad; a5 It is possible to ski in Gstaad anyway, thanks to a good amount of artificial snow.
  • 92.
Value Based AF Extended AF AFRA Bipolar AF
  • 93.
  • 94.
bipolar argumentation framework [Graph over a1, a2, a3, a4] a1 in favour of m, with premises {s, f, (s ∧ f) → m}; a2 in favour of ¬s, with premises {w, w → ¬s}; a3 in favour of ¬w, with premises {b, b → ¬w}; a4 in favour of f, with premises {l, l → f}. Key: m Mary (who is small) is the killer; f the killer is female; s the killer is small; w a witness says that the killer is tall; b the witness is short-sighted; l the killer has long hair and wears lipstick.
  • 95.
  • 96.
  • 97.
delp: defeasible logic programming [SL92] [GS14]
  • 98.
delp: defeasible logic programming ⟨Π, ∆⟩: Π is the non-defeasible knowledge (facts, i.e. atomic information, and strict rules L0 ←− L1, . . . , Ln); ∆ is the defeasible knowledge (defeasible rules L0 −< L1, . . . , Ln).
Def. 40 Let H be a ground literal: ⟨A, H⟩ is an argument structure if: (1) there exists a defeasible derivation* for H from ⟨Π, A⟩; (2) there are no defeasible derivations from ⟨Π, A⟩ of contradictory literals; (3) there is no proper subset A′ ⊂ A such that A′ satisfies (1) and (2).
*A defeasible derivation for Q from ⟨Π, ∆⟩ is a sequence L1, L2, . . . , Ln = Q s.t. for each Li: (i) Li is a fact; or (ii) ∃Ri ∈ ⟨Π, ∆⟩ with head Li and body B1, . . . , Bk, and every literal of the body is an element Lj of the sequence with j < i.
  • 99.
delp: defeasible logic programming Def. 41 ⟨B, S⟩ is a counter-argument for ⟨A, H⟩ at literal P if there exists a sub-argument ⟨C, P⟩ of ⟨A, H⟩ such that P and S disagree, that is, there exist two contradictory literals that have a strict derivation from Π ∪ {S, P}. The literal P is referred to as the counter-argument point, and ⟨C, P⟩ as the disagreement sub-argument.
Def. 42 Let ⟨B, S⟩ be a counter-argument for ⟨A, H⟩ at point P, and ⟨C, P⟩ the disagreement sub-argument. If ⟨B, S⟩ ≻* ⟨C, P⟩, then ⟨B, S⟩ is a proper defeater for ⟨A, H⟩. If ⟨B, S⟩ ⊁ ⟨C, P⟩ and ⟨C, P⟩ ⊁ ⟨B, S⟩, then ⟨B, S⟩ is a blocking defeater for ⟨A, H⟩. ⟨B, S⟩ is a defeater for ⟨A, H⟩ if it is either a proper or a blocking defeater for ⟨A, H⟩.
*≻ is an argument comparison criterion.
  • 100.
delp: defeasible logic programming
Π1 = { cloudy; dry_season; waves; vacation; ¬working ←− vacation }
∆1 = { surf −< nice, spare_time; nice −< waves; spare_time −< ¬busy; ¬busy −< ¬working; ¬nice −< rain; rain −< cloudy; ¬rain −< dry_season }
  • 101.
delp: defeasible logic programming
Π1 = { cloudy; dry_season; waves; vacation; ¬working ←− vacation }
A0 = { surf −< nice, spare_time; nice −< waves; spare_time −< ¬busy; ¬busy −< ¬working }
A1 = { ¬nice −< rain; rain −< cloudy }
A2 = { nice −< waves }
A3 = { rain −< cloudy }
A4 = { ¬rain −< dry_season }
  • 102.
  • 103.
assumption based argumentation framework [Bon+97] [Ton14]
  • 104.
assumption based argumentation framework ⟨L, R, A, ¯⟩: L a language; R a set of rules; A ⊆ L the assumptions; ¯ : A → L the contrariness mapping.
Def. 45 An argument for the claim σ ∈ L supported by A ⊆ A (written A ⊢ σ) is a deduction for σ supported by A (and some R ⊆ R).*
Def. 46 An argument A1 ⊢ σ1 attacks an argument A2 ⊢ σ2 iff σ1 is the contrary of one of the assumptions in A2.
*A (finite) tree with nodes labelled by sentences in L or by τ ∉ L, the root labelled by σ, leaves either τ or sentences in A, and each non-leaf σ′ having, as children, the elements of the body of some rule in R with head σ′.
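The core of Def. 45 is a backward deduction from the claim to assumptions via rules. A minimal Python sketch, assuming rules are given as a map from a head to alternative bodies (the helper `deduces` is hypothetical, not part of any ABA implementation):

```python
def deduces(claim, assumptions, rules, facts=frozenset()):
    """Sketch: can `claim` be derived from `assumptions` using `rules`?
    `rules` maps a head sentence to a list of alternative bodies
    (each body a list of sentences); `facts` are rules with empty body."""
    def derive(s, seen):
        if s in assumptions or s in facts:
            return True
        if s in seen:                      # avoid cyclic derivations
            return False
        for body in rules.get(s, []):
            if all(derive(b, seen | {s}) for b in body):
                return True
        return False
    return derive(claim, frozenset())

# A fragment of the OJ example on the next slide, simplified:
rules = {"innocent(oj)": [["notGuilty(oj)"]]}
print(deduces("innocent(oj)", {"notGuilty(oj)"}, rules))  # True
```

This only decides derivability; a full ABA argument also records which assumptions were used, so that attacks on their contraries can be computed.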
  • 105.
assumption based argumentation framework
R = { innocent(X) ←− notGuilty(X); killer(oj) ←− DNAshows(oj), DNAshows(X) ⊃ killer(X); DNAshows(X) ⊃ killer(X) ←− DNAfromReliableEvidence(X); evidenceUnreliable(X) ←− collected(X, Y), racist(Y); DNAshows(oj) ←− ; collected(oj, mary) ←− ; racist(mary) ←− }
A = { notGuilty(oj); DNAfromReliableEvidence(oj) }
Contraries: the contrary of notGuilty(oj) is killer(oj); the contrary of DNAfromReliableEvidence(oj) is evidenceUnreliable(oj).
  • 106.
  • 107.
  • 108.
  • 109.
aspic+ Def. 47 An argumentation system is a tuple AS = ⟨L, R, ν⟩ where:
∙ ¯ : L → 2^L is a contrariness function s.t. if φ ∈ ψ̄ and: ψ ∉ φ̄, then φ is a contrary of ψ; ψ ∈ φ̄, then φ is a contradictory of ψ (φ = −ψ);
∙ R = Rd ∪ Rs: strict (Rs) and defeasible (Rd) inference rules s.t. Rd ∩ Rs = ∅;
∙ ν : Rd → L is a partial function.*
P ⊆ L is consistent iff ∄φ, ψ ∈ P s.t. φ ∈ ψ̄; otherwise it is inconsistent. A knowledge base in an AS is K ⊆ L, with K = Kn ∪ Kp and {Kn, Kp} a partition of K; Kn contains axioms, which cannot be attacked; Kp contains ordinary premises, which can be attacked. An argumentation theory is a pair AT = ⟨AS, K⟩.
*Informally, ν(r) is a wff in L which says that the defeasible rule r is applicable.
  • 110.
aspic+ Def. 48 An argument a on the basis of an AT = ⟨AS, K⟩, AS = ⟨L, R, ν⟩ is:
1. φ, if φ ∈ K, with: Prem(a) = {φ}; Conc(a) = φ; Sub(a) = {φ}; Rules(a) = DefRules(a) = ∅; TopRule(a) = undefined.
2. a1, . . . , an −→/=⇒ ψ, if a1, . . . , an (n ≥ 0) are arguments such that there exists a strict/defeasible rule r = Conc(a1), . . . , Conc(an) −→/=⇒ ψ in Rs/Rd, with: Prem(a) = ∪_{i=1..n} Prem(ai); Conc(a) = ψ; Sub(a) = ∪_{i=1..n} Sub(ai) ∪ {a}; Rules(a) = ∪_{i=1..n} Rules(ai) ∪ {r}; DefRules(a) = Rules(a) ∩ Rd; TopRule(a) = r.
a is strict if DefRules(a) = ∅, otherwise defeasible; firm if Prem(a) ⊆ Kn, otherwise plausible.
  • 111.
aspic+ Def. 49 Given arguments a and b, a defeats b iff a undercuts, successfully rebuts, or successfully undermines b, where:
∙ a undercuts b (on b′) iff Conc(a) is a contrary/contradictory of ν(r), for some b′ ∈ Sub(b) s.t. r = TopRule(b′) ∈ Rd;
∙ a successfully rebuts b (on b′) iff Conc(a) is a contrary/contradictory of φ, for some b′ ∈ Sub(b) of the form b′′1, . . . , b′′n =⇒ φ, and a ⊀ b′;
∙ a successfully undermines b (on φ) iff Conc(a) is a contrary/contradictory of φ, φ ∈ Prem(b) ∩ Kp, and a ⊀ φ.
Def. 50 AF is the abstract argumentation framework defined by AT = ⟨AS, K⟩ if A is the smallest set of all finite arguments constructed from K, and → is the defeat relation on A.
  • 112.
aspic+ Rationality postulates, for a set of arguments S:
P1: direct consistency iff {Conc(a) | a ∈ S} is consistent;
P2: indirect consistency iff Cl({Conc(a) | a ∈ S}) is consistent;
P3: closure iff {Conc(a) | a ∈ S} = Cl({Conc(a) | a ∈ S});
P4: sub-argument closure iff ∀a ∈ S, Sub(a) ⊆ S.
Sufficient conditions:
∙ Rs is closed under transposition: if φ1, . . . , φn −→ ψ ∈ Rs, then ∀i = 1 . . . n, φ1, . . . , φi−1, ¬ψ, φi+1, . . . , φn −→ ¬φi ∈ Rs;
∙ Cl(Kn) is consistent;
∙ the argument ordering ⪯ is reasonable, namely:
∙ ∀a, b: if a is strict and firm, and b is plausible or defeasible, then b ≺ a;
∙ ∀a, b: if b is strict and firm, then b ⊀ a;
∙ ∀a, a′, b such that a′ is a strict continuation of {a}: if a ⊀ b then a′ ⊀ b, and if b ⊀ a then b ⊀ a′;
∙ given a finite set of arguments {a1, . . . , an}, let a+i be some strict continuation of {a1, . . . , ai−1, ai+1, . . . , an}; then it is not the case that ∀i, a+i ≺ ai.
  • 113.
aspic+ Kp = { Snores; Professor }
Rd = { Snores =⇒d1 Misbehaves; Misbehaves =⇒d2 AccessDenied; Professor =⇒d3 AccessAllowed }
AccessAllowed = −AccessDenied
Snores <′ Professor; d1 < d2; d1 < d3; d3 < d2.
  • 114.
  • 115.
  • 116.
  • 117.
  • 118.
deductive argumentation Def. 53 A deductive argument is an ordered pair ⟨Φ, α⟩ where Φ ⊢ α. Φ is the support (or premises, or assumptions) of the argument, and α is its claim (or conclusion).
consistency constraint: Φ is consistent (not essential, cf. paraconsistent logic).
minimality constraint: there is no Ψ ⊂ Φ such that Ψ ⊢ α.
Def. 56 If ⟨Φ, α⟩ and ⟨Ψ, β⟩ are arguments, then: ∙ ⟨Φ, α⟩ rebuts ⟨Ψ, β⟩ iff α ⊢ ¬β; ∙ ⟨Φ, α⟩ undercuts ⟨Ψ, β⟩ iff α ⊢ ¬⋀Ψ.
  • 119.
deductive argumentation Def. 55 A classical logic argument from a set of formulae ∆ is a pair ⟨Φ, α⟩ such that: Φ ⊆ ∆; Φ ⊬ ⊥; Φ ⊢ α; there is no Φ′ ⊂ Φ such that Φ′ ⊢ α.
Def. 57 Let a and b be two classical arguments. We define the following types of classical attack:
∙ a is a direct undercut of b if ¬Claim(a) ∈ Support(b);
∙ a is a classical defeater of b if Claim(a) ⊢ ¬⋀Support(b);
∙ a is a classical direct defeater of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ⊢ ¬ϕ;
∙ a is a classical undercut of b if ∃Ψ ⊆ Support(b) s.t. Claim(a) ≡ ¬⋀Ψ;
∙ a is a classical direct undercut of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ≡ ¬ϕ;
∙ a is a classical canonical undercut of b if Claim(a) ≡ ¬⋀Support(b);
∙ a is a classical rebuttal of b if Claim(a) ≡ ¬Claim(b);
∙ a is a classical defeating rebuttal of b if Claim(a) ⊢ ¬Claim(b).
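These attack types reduce to propositional entailment checks. A small Python sketch, using a brute-force truth-table test of |= (fine for toy examples; `entails` and `classical_defeater` are illustrative helpers, not library functions):

```python
from itertools import product

# Formulas as nested tuples: ("atom","p"), ("not",f), ("and",f,g), ("or",f,g).
def atoms(f):
    if f[0] == "atom":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    op = f[0]
    if op == "atom":
        return v[f[1]]
    if op == "not":
        return not ev(f[1], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)   # "or"

def entails(premises, conclusion):
    """premises |= conclusion, by enumerating all valuations."""
    vs = sorted(set().union(atoms(conclusion), *(atoms(p) for p in premises)))
    return all(ev(conclusion, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs))
               if all(ev(p, dict(zip(vs, bits))) for p in premises))

def conj(fs):
    out = fs[0]
    for f in fs[1:]:
        out = ("and", out, f)
    return out

def classical_defeater(claim_a, support_b):
    """Claim(a) entails the negation of the conjunction of Support(b)."""
    return entails([claim_a], ("not", conj(support_b)))

p, q = ("atom", "p"), ("atom", "q")
print(classical_defeater(("not", p), [p, q]))  # True
```

The other attack types of Def. 57 follow the same pattern, with ≡ checked as entailment in both directions.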
  • 120.
deductive argumentation [Graph of three arguments:]
∙ ⟨{bp(high), ok(diuretic), bp(high) ∧ ok(diuretic) → give(diuretic), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(diuretic) ∧ ¬ok(betablocker)⟩
∙ ⟨{bp(high), ok(betablocker), bp(high) ∧ ok(betablocker) → give(betablocker), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(betablocker) ∧ ¬ok(diuretic)⟩
∙ ⟨{symptom(emphysema), symptom(emphysema) → ¬ok(betablocker)}, ¬ok(betablocker)⟩
  • 121.
  • 122.
a logic for clinical knowledge [HW12] [Wil+15]
  • 123.
a logic for clinical knowledge Def. 58 Given treatments τ1 and τ2 and X ⊆ evidence, there are three kinds of inductive argument:
1. ⟨X, τ1 > τ2⟩: the evidence in X supports the claim that treatment τ1 is superior to τ2;
2. ⟨X, τ1 ∼ τ2⟩: the evidence in X supports the claim that treatment τ1 is equivalent to τ2;
3. ⟨X, τ1 < τ2⟩: the evidence in X supports the claim that treatment τ1 is inferior to τ2.
Def. 59 If the claim of argument ai is ϵi and the claim of argument aj is ϵj, then ai conflicts with aj whenever:
1. ϵi = τ1 > τ2 and (ϵj = τ1 ∼ τ2 or ϵj = τ1 < τ2);
2. ϵi = τ1 ∼ τ2 and (ϵj = τ1 > τ2 or ϵj = τ1 < τ2);
3. ϵi = τ1 < τ2 and (ϵj = τ1 > τ2 or ϵj = τ1 ∼ τ2).
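The conflict relation of Def. 59 is purely syntactic on claims. A minimal Python sketch (an assumption of this sketch: claims are represented as triples over the same ordered treatment pair):

```python
def conflicts(claim_i, claim_j):
    """Def. 59, sketched: two claims about the same ordered treatment pair
    conflict iff they assert different relations ('>', '~', '<')."""
    t1, rel_i, t2 = claim_i
    u1, rel_j, u2 = claim_j
    return (t1, t2) == (u1, u2) and rel_i != rel_j

print(conflicts(("CP", ">", "NT"), ("CP", "~", "NT")))  # True
print(conflicts(("CP", ">", "NT"), ("CP", ">", "NT")))  # False
```

A fuller version would also normalise reversed pairs (τ2 < τ1 versus τ1 > τ2); that case is omitted here.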
  • 124.
a logic for clinical knowledge Def. 60 For any pair of arguments ai and aj and a preference relation R, ai attacks aj with respect to R iff ai conflicts with aj and it is not the case that aj is strictly preferred to ai according to R. A domain-specific benefit preference relation is defined in [HW12].
Def. 61 (Meta-arguments) For a ∈ Arg(evidence), if there is an e ∈ support(a) such that:
∙ e is not statistically significant, and e is not a side-effect, then this is an attacker: ⟨Not statistically significant⟩;
∙ e is a non-randomised and non-blind trial, then this is an attacker: ⟨Non-randomised & non-blind trials⟩;
∙ e is a meta-analysis that concerns a narrow patient group, then this is an attacker: ⟨Meta-analysis for a narrow patient group⟩.
  • 125.
a logic for clinical knowledge
ID  Left  Right  Indicator       Risk ratio  Outcome   p
e1  CP*   NT†    Pregnancy       0.05        superior  0.01
e2  CP    NT     Ovarian cancer  0.99        superior  0.07
e3  CP    NT     Breast cancer   1.04        inferior  0.01
e4  CP    NT     DVT             1.02        inferior  0.05
N.B.: Fictional data. *Contraceptive pill. †No Treatment.
  • 126.
a logic for clinical knowledge [Graph: ⟨{e1}, CP > NT⟩, ⟨{e2}, CP > NT⟩, and ⟨{e1, e2}, CP > NT⟩ in conflict with ⟨{e3}, CP < NT⟩, ⟨{e4}, CP < NT⟩, and ⟨{e3, e4}, CP < NT⟩; the meta-argument ⟨Not statistically significant⟩ attacks the arguments supported by e2 (p = 0.07). Evidence table as on the previous slide.]
  • 127.
  • 128.
  • 129.
  • 130.
epistemic approach An epistemic probability distribution* for an argumentation framework ∆ = ⟨A, →⟩ is P : A → [0, 1].
Def. 65 For an argumentation framework AF = ⟨A, →⟩ and a probability assignment P, the epistemic extension is {a ∈ A | P(a) > 0.5}.
*In the tutorial, a way to compute it for arguments based on classical deduction.
  • 131.
epistemic approach
COH: P is coherent if for every a, b ∈ A, if a attacks b then P(a) ≤ 1 − P(b).
SFOU: P is semi-founded if P(a) ≥ 0.5 for every unattacked a ∈ A.
FOU: P is founded if P(a) = 1 for every unattacked a ∈ A.
SOPT: P is semi-optimistic if P(a) ≥ 1 − ∑_{b∈a−} P(b) for every a ∈ A with at least one attacker.
OPT: P is optimistic if P(a) ≥ 1 − ∑_{b∈a−} P(b) for every a ∈ A.
JUS: P is justifiable if P is coherent and optimistic.
TER: P is ternary if P(a) ∈ {0, 0.5, 1} for every a ∈ A.
RAT: P is rational if for every a, b ∈ A, if a attacks b then P(a) > 0.5 implies P(b) ≤ 0.5.
NEU: P is neutral if P(a) = 0.5 for every a ∈ A.
INV: P is involutary if for every a, b ∈ A, if a attacks b, then P(a) = 1 − P(b).
Let the event “a is accepted” be denoted as a, and let Eac(S) = {a | a ∈ S}. Then P is weakly p-justifiable iff ∀a ∈ A, ∀b ∈ a−, P(a) ≤ 1 − P(b).
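Several of these properties can be checked mechanically for a given assignment P. A Python sketch (all function names are illustrative, not from any particular implementation):

```python
def unattacked(args, attacks):
    """Arguments that are the target of no attack."""
    targets = {b for (_, b) in attacks}
    return [a for a in args if a not in targets]

def is_coherent(P, attacks):
    """COH: a attacks b implies P(a) <= 1 - P(b)."""
    return all(P[a] <= 1 - P[b] for (a, b) in attacks)

def is_founded(P, args, attacks):
    """FOU: P(a) = 1 for every unattacked a."""
    return all(P[a] == 1 for a in unattacked(args, attacks))

def is_rational(P, attacks):
    """RAT: a attacks b and P(a) > 0.5 implies P(b) <= 0.5."""
    return all(P[b] <= 0.5 for (a, b) in attacks if P[a] > 0.5)

def epistemic_extension(P):
    """Def. 65: arguments with probability strictly above 0.5."""
    return {a for a, p in P.items() if p > 0.5}

P = {"a": 0.9, "b": 0.1}
attacks = [("a", "b")]
print(is_coherent(P, attacks), epistemic_extension(P))  # True {'a'}
```

Here P is coherent and rational but not founded (the unattacked a has P(a) = 0.9, not 1).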
  • 132.
epistemic approach Def. 67
Restriction on complete* probability function P   Classical semantics
No restriction                                    complete extensions
No arguments a such that P(a) = 0.5               stable
Maximal no. of a such that P(a) = 1               preferred
Maximal no. of a such that P(a) = 0               preferred
Maximal no. of a such that P(a) = 0.5             grounded
Minimal no. of a such that P(a) = 1               grounded
Minimal no. of a such that P(a) = 0               grounded
Minimal no. of a such that P(a) = 0.5             semi-stable
*Coherent, founded, and ternary. http://arxiv.org/abs/1405.3376
  • 133.
  • 134.
structural approach P : {∆′ ⊑ ∆} → [0, 1]
Subframework      Probability
∆1  a ↔ b         0.09
∆2  a             0.81
∆3  b             0.01
∆4  (empty)       0.09
PGR({a, b}) = 0.00; PGR({a}) = P(∆2) = 0.81; PGR({b}) = P(∆3) = 0.01; PGR({}) = P(∆1) + P(∆4) = 0.18
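This can be reproduced by enumerating the subframeworks, computing the grounded extension of each, and summing probabilities. A Python sketch of the a ↔ b example above (helper names are illustrative):

```python
def grounded(args, attacks):
    """Grounded extension: least fixpoint of the characteristic function,
    computed by iterating 'defended by the current set' from the empty set."""
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in attacks for c in ext)
                           for b in args if (b, a) in attacks)}
        if defended == ext:
            return ext
        ext = defended

def p_gr(target, attacks, subframeworks):
    """P_GR(S): total probability of subframeworks whose grounded
    extension equals S. `subframeworks` is a list of (arguments, prob)."""
    total = 0.0
    for sub_args, prob in subframeworks:
        sub_atts = {(x, y) for (x, y) in attacks
                    if x in sub_args and y in sub_args}
        if grounded(set(sub_args), sub_atts) == set(target):
            total += prob
    return total

attacks = {("a", "b"), ("b", "a")}
P_sub = [({"a", "b"}, 0.09), ({"a"}, 0.81), ({"b"}, 0.01), (set(), 0.09)]
print(p_gr({"a"}, attacks, P_sub))             # 0.81
print(round(p_gr(set(), attacks, P_sub), 2))   # 0.18
```

PGR({a, b}) is 0 because no subframework has grounded extension {a, b}: in ∆1 the mutual attack leaves both arguments undecided.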
  • 135.
  • 136.
a computational framework [Diagram: a Structured Argumentation Framework (SAF) is modelled in an ASPIC+ argumentation system (logical language, inference rules, contrariness function, ...); it converts to a DAF/EAF and to an Extended Evidential Framework (EEAF); associating probabilities yields a Probabilistic Extended Evidential Framework, which converts to a PrEAF/PrAF with semantics preserved.]
  • 137.
  • 138.
  • 139.
  • 140.
What is the cause of the illness?
  • 141.
[Diagram: Analyst Joe considers illness among people and livestock illness, with a possible connection: a CONTAMINATED WATER SUPPLY.] Is this information credible? Are there alternative explanations? Is there evidence for the contamination of the water supply? Analysts must reason with different types of evidence.
  • 142.
research foci Attributes of the problem domain: ∙ Intelligence analysis is critical for making well-informed decisions ∙ Large amounts of conflicting, incomplete information ∙ Reasoning with different types of evidence. Research question: how to develop agents that support reasoning with different types of evidence, in a combined approach, throughout the process of analysis?
  • 143.
intelligence analysis Def. 73 The application of individual and collective cognitive methods to evaluate, integrate, and interpret information about situations and events, to provide warning of potential threats or identify opportunities. [Diagram: the Pirolli & Card model: a foraging loop (search and filter external data sources into a shoebox, then an evidence file) feeds a sense-making loop (schematize, build case, tell story, reevaluate) that produces hypotheses; structure and effort increase along the way.] Effective if: TIMELY, TARGETED, and TRUSTED.
  • 144.
collaboration among analysts Team of analysts: more effective, prevents bias, brings different expertise and resources. Challenges at different stages of analysis (shoebox, evidence file, hypotheses): gather information; share data and analysis; integrate and annotate; assess credibility; identify plausible hypotheses; mitigate cognitive biases.
  • 145.
cispaces agent support [Architecture: analyst, CISpaces interface (ToolBox, WorkBox, InfoBox, ReqBox, ChatBox), communication layer, agents.] CISpaces interface: working space and access to agent support. Sensemaking agent: supports collaborative analysis of arguments. Crowd-sourcing agent: enables participation of large groups of contributors. Provenance agent: assesses the credibility of information.
  • 146.
  • 147.
sensemaking agent (smag) - analysis construction ∙ Annotation of Pro links; ∙ Suggests CQs (Con links) to prevent cognitive biases. Causal (distribution of activities): ∙ Typically, if C occurs, then E will occur ∙ In this case, C occurs ⇒ Therefore, in this case E will occur. Association (element connections): ∙ An activity occurs, and an entity may be involved ∙ To perform the activity some property H is required ∙ The entity fits the property H ⇒ Therefore, the entity is associated with the activity.
  • 148.
smag analysis construction (cont.) The analyst annotates Pro links and nodes, which are matched to an argument scheme (e.g. Expert Opinion, cause to effect, identification, ...): ∙ E is an expert in domain D containing A; ∙ E asserts that A is true; ⇒ Therefore, A may plausibly be true. Example: a lab expert on water toxins and chemicals asserts that there is a bacterium contaminating the water supply; hence, the water supply in Kish is contaminated (Pro, Expert Opinion). The analyst then selects critical questions (CQ1: Is E an expert in D? CQ2: Is E reliable?); CISpaces shows a negative answer to a CQ (e.g. Con, CQ2: “The expert is not reliable”) to prevent cognitive biases.
  • 149.
smag hypotheses identification Controversial standpoints as extensions. 1. Transform the current workbox view into an argumentation framework: premises q0 and q1 with a cause-to-effect link q0, q1 =⇒ q2; premise q3 with q1, q3 =⇒ q4; q2 and q4 are contradictory statements. ASPIC+ argumentation framework: Premises: q0, q1, q3; Rules: q0, q1 =⇒ q2; q1, q3 =⇒ q4; Negation: q2 − q4; Arguments: A0: q0, A1: q1, A2: q3, A4: A0, A1 =⇒ q2, A5: A1, A2 =⇒ q4. 2. The ASPIC+/Dung's AF implementation identifies the sets of acceptable arguments.
  • 150.
smag hypotheses identification (cont.) 3. CISpaces shows what conclusions can be supported: ∙ labelled according to the extensions computed; ∙ arguments shared through the Argument Interchange Format (AIF). [Graph of statements: Unidentified illness affects the local livestock in Kish; Non-waterborne bacteria were engineered and released in the water supply; Illness among young and elderly people in Kish caused by bacteria; Toxic bacteria contaminate the local water system in Kish; NGO lab reports examined the contamination; NON-waterborne bacteria contaminate the water supply; There are bacteria in the water supply; Waterborne bacteria contaminate the water supply; Waterborne bacteria have formed by a sewage leakage in the water supply pipes.]
  • 151.
  • 152.
crowdsourcing agent 1. Critical questions trigger the need for further information on a topic. 2. The analyst calls the crowdsourcing agent (CWSAg). 3. CWSAg distributes the query to a large group of contributors. 4. CWSAg aggregates the results and shows statistics to the analyst.
  • 153.
cwsag results import [Diagram: the answer to Q0 (“Clear”, Con) is imported AGAINST “Water Contaminated”; the answer to Q1 (“21.1”, Pro) is imported FOR “Water Contaminated”; the two imported statements are contradictory.]
  • 154.
  • 155.
[Diagram: two pieces of information about border L1-L2 reach Analyst Joe: ij (“Gang heading South”), an image observation from surveillance via image processing, with provenance chain GP(ij); and ik (“Gang crossing North border”), a message from an observer via a messenger and an informer, with provenance chain GP(ik).]
  • 156.
argument from provenance Given a provenance chain GP(ij) of information ij: ∙ (Where?) ij was derived from an entity A; ∙ (Who?) was associated with actor AG; ∙ (What?) was generated by activity P1; ∙ (How?) was informed by activity P2; ∙ (Why?) was generated to satisfy goal X; ∙ (When?) was generated at time T; ∙ (Which?) was generated by using some entities A1, …, AN; where A, AG, P1, … belong to GP(ij), and the stated elements of GP(ij) infer that information ij is true; ⇒ Therefore, information ij may plausibly be taken to be true. CQA1: Is ij consistent with other information? CQA2: Is ij supported by evidence? CQA3: Does GP(ij) contain other elements that lead us not to believe ij? CQA4: Are there provenance elements that should have been included for believing ij?
  • 157.
argument for provenance preference Given information ij and ik, and the known parts of their provenance chains GP(ij) and GP(ik): if there exists a criterion Ctr (e.g. trustworthiness, reliability, timeliness, shortest path) such that GP(ij) ≪Ctr GP(ik), then ij ≪ ik; a criterion Ctr′ leads to assert that GP(ij) ≪Ctr′ GP(ik); ⇒ Therefore, ik should be preferred to ij. CQB1: Does a different criterion Ctr1, such that GP(ij) ≫Ctr1 GP(ik), lead to ij ≪ ik not being valid? CQB2: Is there any exception to criterion Ctr such that even if provenance chain GP(ik) is preferred to GP(ij), information ik is not preferred to information ij? CQB3: Is there any other reason for believing that the preference ij ≪ ik is not valid?
  • 158.
pvag provenance analysis & import [Diagram (primary-source pattern, provenance explanation): INFO “Livestock illness” (prov:time 2015-04-27T02:27:40Z) was derived from a Farm Daily Report, prepared by a Kish Farmer (type: PrimarySource), and from Livestock Pictures used to annotate livestock information; a US Patrol Report Extract, generated by the US Team Patrol, used this information. Import of preferences?]
  • 159.
theories/technologies integrated ∙ Argument representation: ∙ Argument schemes and critical questions (domain specific) ∙ “Bipolar-like” graph for user consumption ∙ AIF (extension for provenance) ∙ ASPIC(+) ∙ Arguments based on preferences (partially under development) ∙ Theoretical framework for acceptability status: ∙ AF ∙ PrAF (case study for [Li15]) ∙ AFRA for preference handling (under development) ∙ Computational machinery: jArgSemSAT
  • 160.
  • 161.
  • 162.
  • 163.
  • 164.
  • 165.
csp-based approach A Constraint Satisfaction Problem (CSP) P is a triple P = ⟨X, D, C⟩ such that: ∙ X = ⟨x1, . . . , xn⟩ is a tuple of variables; ∙ D = ⟨D1, . . . , Dn⟩ is a tuple of domains such that ∀i, xi ranges over Di; ∙ C = ⟨C1, . . . , Ct⟩ is a tuple of constraints, where ∀j, Cj = ⟨RSj, Sj⟩, with scope Sj ⊆ {x1, . . . , xn} and RSj a relation over the domains of the variables in Sj. A solution to the CSP P is A = ⟨a1, . . . , an⟩ where ∀i, ai ∈ Di and, for each j, RSj holds on the projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
  • 166.
csp-based approach Given an AF: 1. create a variable for each argument, with domain {0, 1} (∀ai ∈ A, ∃xi ∈ X such that Di = {0, 1}); 2. describe the constraints associated to the different definitions of Dung's argumentation framework, e.g. {a1, a2} ⊆ A is conflict-free iff ¬(x1 = 1 ∧ x2 = 1); 3. solve the CSP.
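For small frameworks, step 3 can even be a brute-force search over all 0/1 assignments. The Python sketch below (illustrative; a real system would hand the constraints to a CSP solver) encodes conflict-freeness plus the stable-semantics requirement that every excluded argument is attacked, as one choice of "constraints associated to Dung's definitions":

```python
from itertools import product

def stable_extensions(args, attacks):
    """One 0/1 variable per argument; keep the assignments satisfying
    'conflict-free' and 'every excluded argument is attacked'."""
    args = sorted(args)
    solutions = []
    for bits in product([0, 1], repeat=len(args)):
        x = dict(zip(args, bits))
        conflict_free = all(not (x[a] and x[b]) for (a, b) in attacks)
        stable = all(x[a] or any(x[b] and (b, a) in attacks for b in args)
                     for a in args)
        if conflict_free and stable:
            solutions.append({a for a in args if x[a]})
    return solutions

exts = stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")})
# mutual attack: the two stable extensions are {'a'} and {'b'}
```

Other semantics follow the same recipe with different constraints over the same variables.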
  • 167.
asp-based approach ASPARTIX-D / ASPARTIX-V / DIAMOND [EGW10] [Dvo+11]
  • 168.
asp-based approach
πST = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X, Y);
        defeated(X) ← in(Y), defeat(Y, X);
        ← out(X), not defeated(X) }.
Tests for subset-maximality exploit the metasp optimisation frontend for the ASP package gringo/claspD.
  • 169.
  • 170.
sat-based approaches [Dvo+12] ⋀_{a→b} (¬x_a ∨ ¬x_b) ∧ ⋀_{b→c} ( ¬x_c ∨ ⋁_{a→b} x_a )
  • 171.
sat-based approaches [Cer+14b] C1: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = in ⇔ ∀a2 ∈ a1−, Lab(a2) = out
Lab(a1) = out ⇔ ∃a2 ∈ a1− : Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1−, Lab(a2) ≠ in ∧ ∃a3 ∈ a1− : Lab(a3) = undec
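The C1 conditions can also be checked directly by enumerating the 3^n labellings, a useful oracle against a SAT encoding. A Python sketch (illustrative; this is not the implementation of [Cer+14b]):

```python
from itertools import product

def complete_labellings(args, attacks):
    """Enumerate labellings satisfying the C1 conditions:
    in <-> all attackers out; out <-> some attacker in;
    undec <-> no attacker in and some attacker undec."""
    args = sorted(args)
    attackers = {a: [b for (b, c) in attacks if c == a] for a in args}
    result = []
    for labs in product(["in", "out", "undec"], repeat=len(args)):
        L = dict(zip(args, labs))
        ok = True
        for a in args:
            att = attackers[a]
            if L[a] == "in":
                ok = all(L[b] == "out" for b in att)
            elif L[a] == "out":
                ok = any(L[b] == "in" for b in att)
            else:
                ok = (all(L[b] != "in" for b in att)
                      and any(L[b] == "undec" for b in att))
            if not ok:
                break
        if ok:
            result.append(L)
    return result

print(len(complete_labellings({"a", "b"}, {("a", "b"), ("b", "a")})))  # 3
```

For the mutual attack a ↔ b this yields the three complete labellings: a in / b out, a out / b in, and both undec; an unattacked argument is forced to in, as C1 requires.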
  • 172.
sat-based approaches [Cer+14b]
⋀_{i∈{1,…,k}} ( (Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui) )    [each argument has exactly one label]
∧ ⋀_{i | ϕ(i)⁻ = ∅} ( Ii ∧ ¬Oi ∧ ¬Ui )    [unattacked arguments are in]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( Ii ∨ ⋁_{j | ϕ(j)→ϕ(i)} ¬Oj )    [all attackers out ⇒ in]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬Ii ∨ Oj )    [in ⇒ all attackers out]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬Ij ∨ Oi )    [some attacker in ⇒ out]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ¬Oi ∨ ⋁_{j | ϕ(j)→ϕ(i)} Ij )    [out ⇒ some attacker in]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{k | ϕ(k)→ϕ(i)} ( Ui ∨ ¬Uk ∨ ⋁_{j | ϕ(j)→ϕ(i)} Ij )    [an undec attacker and no attacker in ⇒ undec]
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ⋀_{j | ϕ(j)→ϕ(i)} ( ¬Ui ∨ ¬Ij ) ∧ ( ¬Ui ∨ ⋁_{j | ϕ(j)→ϕ(i)} Uj ) )    [undec ⇒ no attacker in and some attacker undec]
  • 173.
sat-based approaches [Cer+14b] Ca1: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = in ⇔ ∀a2 ∈ a1−, Lab(a2) = out
Lab(a1) = out ⇔ ∃a2 ∈ a1− : Lab(a2) = in
  • 174.
sat-based approaches [Cer+14b] Cb1: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = out ⇔ ∃a2 ∈ a1− : Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1−, Lab(a2) ≠ in ∧ ∃a3 ∈ a1− : Lab(a3) = undec
  • 175.
sat-based approaches [Cer+14b] Cc1: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = in ⇔ ∀a2 ∈ a1−, Lab(a2) = out
Lab(a1) = undec ⇔ ∀a2 ∈ a1−, Lab(a2) ≠ in ∧ ∃a3 ∈ a1− : Lab(a3) = undec
  • 176.
sat-based approaches [Cer+14b] C2: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = in ⇒ ∀a2 ∈ a1−, Lab(a2) = out
Lab(a1) = out ⇒ ∃a2 ∈ a1− : Lab(a2) = in
Lab(a1) = undec ⇒ ∀a2 ∈ a1−, Lab(a2) ≠ in ∧ ∃a3 ∈ a1− : Lab(a3) = undec
  • 177.
sat-based approaches [Cer+14b] C3: If a1 is not attacked, Lab(a1) = in.
Lab(a1) = in ⇐ ∀a2 ∈ a1−, Lab(a2) = out
Lab(a1) = out ⇐ ∃a2 ∈ a1− : Lab(a2) = in
Lab(a1) = undec ⇐ ∀a2 ∈ a1−, Lab(a2) ≠ in ∧ ∃a3 ∈ a1− : Lab(a3) = undec
  • 178.
sat-based approaches [Cer+14b] [Plot: IPC normalised to 100 with respect to the number of arguments (50 to 200), comparing the encodings C1, Ca1, Cb1, Cc1, C2, C3.]
  • 179.
iccma 2015 The First International Competition on Computational Models of Argumentation http://argumentationcompetition.org/ Results announced yesterday
  • 180.
  • 181.
  • 182.
  • 183.
    a parallel algorithm [BGG05][Cer+14a] [Cer+15]
  • 184.
  • 185.
  • 186.
  • 187.
  • 188.
a parallel algorithm [Graph: Level 1 contains arguments a, b, c, d, e, f; Level 2 contains g, h]
  • 189.
a parallel algorithm [Graph: Level 1 contains arguments a, b, c, d, e, f; Level 2 contains g, h] Labellings of Level 1: { ⟨{a, c, e}, {b, d, f}, {}⟩, ⟨{a, c, f}, {b, d, e}, {}⟩, ⟨{a, d, e}, {b, c, f}, {}⟩, ⟨{a, d, f}, {b, c, e}, {}⟩ }
  • 190.
a parallel algorithm [Graph: Level 1 contains arguments a, b, c, d, e, f; Level 2 contains g, h] Moving to the last level: B1: no argument in S3 is attacked from “outside”, for Lab ∈ { ⟨{a, c, e}, {b, d, f}, {}⟩, ⟨{a, c, f}, {b, d, e}, {}⟩ }; B2: g is attacked by d, for Lab ∈ { ⟨{a, d, e}, {b, c, f}, {}⟩, ⟨{a, d, f}, {b, c, e}, {}⟩ }. Cases B1 and B2 are computed in parallel.
  • 191.
a parallel algorithm [Graph: Level 1 contains arguments a, b, c, d, e, f; Level 2 contains g, h] Final labellings: { ⟨{a, c, e, g}, {b, d, f, h}, {}⟩, ⟨{a, c, e, h}, {b, d, f, g}, {}⟩, ⟨{a, c, f, g}, {b, d, e, h}, {}⟩, ⟨{a, c, f, h}, {b, d, e, g}, {}⟩, ⟨{a, d, e, h}, {b, c, f, g}, {}⟩, ⟨{a, d, f, h}, {b, c, e, g}, {}⟩ }
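The level decomposition driving the parallelism is the SCC condensation of the framework. A Python sketch (naive quadratic SCC computation via mutual reachability, illustrative only, not the algorithm of [Cer+15]) that assigns each SCC its level:

```python
def sccs(args, attacks):
    """Strongly connected components via mutual reachability."""
    succ = {a: {b for (x, b) in attacks if x == a} for a in args}
    def reach(a):
        seen, stack = set(), [a]
        while stack:
            n = stack.pop()
            for m in succ[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen
    R = {a: reach(a) for a in args}
    comps = []
    for a in args:
        comp = {b for b in args if b in R[a] and a in R[b]} | {a}
        if comp not in comps:
            comps.append(comp)
    return comps

def scc_levels(args, attacks):
    """An SCC attacked by no other SCC is at level 1; otherwise it is at
    1 + the maximum level among the SCCs attacking it."""
    comps = [frozenset(c) for c in sccs(args, attacks)]
    attacks_on = {c: [d for d in comps if d != c and
                      any((x, y) in attacks for x in d for y in c)]
                  for c in comps}
    level = {}
    while len(level) < len(comps):
        for c in comps:
            if c not in level and all(d in level for d in attacks_on[c]):
                level[c] = 1 + max((level[d] for d in attacks_on[c]), default=0)
    return level

attacks = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c")}
print(scc_levels({"a", "b", "c", "d"}, attacks))  # {a,b} at level 1, {c,d} at level 2
```

SCCs in the same level have no attacks between them, so their labellings can be computed in parallel, as in cases B1 and B2 above.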
  • 192.
We need to be smart (Holger H. Hoos, invited keynote talk at ECAI 2014)
  • 193.
  • 194.
features from an argumentation graph
Directed graph (26 features):
∙ Structure: # vertices (|A|), # edges (|→|), # vertices / # edges (|A|/|→|), # edges / # vertices (|→|/|A|), density
∙ Degree: average, stdev, max, min; attackers: #, average, stdev, max, min
∙ SCCs: #, average, stdev, max, min
∙ Structure: # self-defeating, # unattacked, flow hierarchy, Eulerian, aperiodic
∙ CPU-time: …
Undirected graph (24 features):
∙ Structure: # edges, # vertices / # edges, # edges / # vertices, density
∙ Degree: average, stdev, max, min
∙ Components: #, average, stdev, max, min
∙ Structure: transitivity; 3-cycles: #, average, stdev, max, min
∙ CPU-time: …
  • 195.
how hard is it to get the features? Average CPU-time (and stdev) needed for extracting the features of a given class:
Directed Graph Features (DG)
Class       Mean   StdDev  # feat
Graph size  0.001  0.009   5
Degree      0.003  0.009   4
SCC         0.046  0.036   5
Structure   2.304  2.868   5
Undirected Graph Features (UG)
Class       Mean   StdDev  # feat
Graph size  0.001  0.003   4
Degree      0.002  0.004   4
Components  0.011  0.009   5
Structure   0.799  0.684   1
Triangles   0.787  0.671   5
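The cheap "graph size" and "degree" features take only a few lines to compute. A pure-Python sketch (illustrative; the actual feature extractor of this work differs):

```python
def af_features(args, attacks):
    """A few of the directed-graph features listed above."""
    n, m = len(args), len(attacks)
    in_deg = {a: 0 for a in args}
    out_deg = {a: 0 for a in args}
    for (x, y) in attacks:
        out_deg[x] += 1
        in_deg[y] += 1
    degs = [in_deg[a] + out_deg[a] for a in args]
    avg = sum(degs) / n
    var = sum((d - avg) ** 2 for d in degs) / n
    return {
        "num_vertices": n,
        "num_edges": m,
        "density": m / (n * (n - 1)) if n > 1 else 0.0,
        "avg_degree": avg,
        "stdev_degree": var ** 0.5,
        "num_self_defeating": sum(1 for (x, y) in attacks if x == y),
        "num_unattacked": sum(1 for a in args if in_deg[a] == 0),
    }

print(af_features({"a", "b", "c"}, {("a", "b"), ("b", "a"), ("a", "c")})["density"])  # 0.5
```

The expensive classes on the slide (SCC- and structure-based features) dominate extraction time, which is why the CPU-time for extraction is itself used as a feature.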
  • 196.
protocol: some numbers ∙ |SCCs∆| in 1:100; ∙ |A| in 10:5,000; ∙ |→| in 25:270,000 (Erdős–Rényi, p uniformly distributed); ∙ overall, 10,000 AFs; ∙ cutoff time of 900 seconds (also the value recorded for runs that crashed, timed out, or ran out of memory); ∙ EPMs both for regression (random forests) and classification (M5-Rules), using WEKA; ∙ evaluation using a 10-fold cross-validation approach on a uniform random permutation of the instances.
  • 197.
result 1: best features for prediction
Solver     B1                         B2                          B3
AspartixM  number of arguments        density of directed graph   size of max SCC
PrefSAT    density of directed graph  number of SCCs              aperiodicity
NAD-Alg    density of directed graph  CPU-time for density        CPU-time for Eulerian
SSCp       density of directed graph  number of SCCs              size of max SCC
Determined by a greedy forward search based on the Correlation-based Feature Selection (CFS) attribute evaluator.
    result 2: predicting (log) runtime

    RMSE of Regression (lower is better)

    Solver      B1     B2     B3     DG     UG     SCC    All
    AspartixM   0.66   0.49   0.49   0.48   0.49   0.52   0.48
    PrefSAT     1.39   0.93   0.93   0.89   0.92   0.94   0.89
    NAD-Alg     1.48   1.47   1.47   0.77   0.57   1.61   0.55
    SSCp        1.36   0.80   0.78   0.75   0.75   0.79   0.74

    RMSE = sqrt( (1/n) Σ_{i=1}^{n} ( log10(t_i) − log10(y_i) )^2 ),
    with t_i the observed and y_i the predicted runtime.

    Feature classes involved: AF structure; SCCs; CPU-time for feature extraction; Undirected Graph.
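The regression metric is the root mean squared error on log10 runtimes. Assuming t_i are the observed and y_i the predicted runtimes, it can be computed as:

```python
import math

def rmse_log10(true_times, predicted_times):
    """Root mean squared error on log10 runtimes, the regression
    metric of the table above."""
    n = len(true_times)
    return math.sqrt(
        sum((math.log10(t) - math.log10(y)) ** 2
            for t, y in zip(true_times, predicted_times)) / n
    )

# A predictor that is off by one order of magnitude on every
# instance scores 1.0 on this scale:
err = rmse_log10([1.0, 10.0, 100.0], [10.0, 100.0, 1000.0])
```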
    result 3: best features for classification

    C-B1: number of arguments
    C-B2: density of directed graph
    C-B3: min attackers

    Determined by a greedy forward search based on the Correlation-based Feature Selection (CFS) attribute evaluator.
    Feature classes involved: AF structure; Attackers.
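The greedy forward search used with the CFS evaluator can be sketched generically: start from the empty subset and repeatedly add the feature that most improves the subset's merit, stopping when no addition helps. Here `merit` is a stand-in for the CFS correlation-based score, which is not reimplemented:

```python
def greedy_forward_selection(features, merit):
    """Greedy forward search over feature subsets: a sketch of the
    search strategy, with `merit` as a stand-in scoring function."""
    selected, best = [], float("-inf")
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            return selected
        # score every one-feature extension of the current subset
        scored = [(merit(selected + [f]), f) for f in candidates]
        score, f = max(scored)
        if score <= best:          # no improvement: stop
            return selected
        selected.append(f)
        best = score

# Toy merit: rewards two "informative" features, penalises subset size.
def merit(subset):
    informative = {"density", "num_args"}
    return sum(1 for f in subset if f in informative) - 0.1 * len(subset)

sel = greedy_forward_selection(
    ["density", "num_args", "noise1", "noise2"], merit)
```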
    result 4: classification, i.e. selecting the best solver for a given AF

    Classification (higher is better)

                      |A|     density  min attackers  DG      UG      SCC     All
    Accuracy          48.5%   70.1%    69.9%          78.9%   79.0%   55.3%   79.5%
    Prec. AspartixM   35.0%   64.6%    63.7%          74.5%   74.9%   42.2%   76.1%
    Prec. PrefSAT     53.7%   67.8%    68.1%          79.6%   80.5%   60.4%   80.1%
    Prec. NAD-Alg     26.5%   69.2%    69.0%          81.7%   85.1%   35.3%   86.0%
    Prec. SSCp        54.3%   73.0%    72.7%          76.6%   76.8%   57.8%   77.2%

    Feature classes involved: AF structure; Attackers; Undirected Graph; SCCs.
    result 5: algorithm selection

    Metric: Fastest (max. 1007)
    ∙ AspartixM: 106
    ∙ NAD-Alg: 170
    ∙ PrefSAT: 278
    ∙ SSCp: 453
    ∙ EPMs Regression: 755
    ∙ EPMs Classification: 788

    Metric: IPC* (max. 1007)
    ∙ NAD-Alg: 210.1
    ∙ AspartixM: 288.3
    ∙ PrefSAT: 546.7
    ∙ SSCp: 662.4
    ∙ EPMs Regression: 887.7
    ∙ EPMs Classification: 928.1

    *Scale of (log) relative performance
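Once the EPMs are trained, per-instance algorithm selection reduces to predicting each solver's runtime from the instance's features and running the solver with the smallest prediction. A minimal sketch, with toy linear stand-ins in place of the WEKA-trained models:

```python
def select_solver(feature_vector, models):
    """Regression-based algorithm selection: predict each solver's
    (log) runtime and return the solver with the smallest prediction.
    `models` maps solver name -> predictor; both are illustrative
    stand-ins for the trained EPMs."""
    predictions = {name: model(feature_vector)
                   for name, model in models.items()}
    return min(predictions, key=predictions.get)

# Toy stand-in models: linear functions of a single feature (density).
models = {
    "PrefSAT": lambda f: 0.5 + 2.0 * f["density"],
    "SSCp":    lambda f: 1.0 + 0.5 * f["density"],
}
choice = select_solver({"density": 0.9}, models)
```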
    belief revision and argumentation [FKS09] [FGS13]
    belief revision and argumentation: potential cross-fertilisation

    Argumentation in Belief Revision
    ∙ Justification-based truth maintenance systems
    ∙ Assumption-based truth maintenance systems
    Some conceptual differences: in revision, external beliefs are compared with internal beliefs and, after a selection process, some sentences are discarded while others are accepted. [FKS09]

    Belief Revision in Argumentation
    ∙ Changing by adding or deleting an argument.
    ∙ Changing by adding or deleting a set of arguments.
    ∙ Changing the attack (and/or defeat) relation among arguments.
    ∙ Changing the status of beliefs (as conclusions of arguments).
    ∙ Changing the type of an argument (from strict to defeasible, or vice versa).
    abstract dialectical frameworks: Dependency Graph + Acceptance Conditions
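For very small ADFs, the two-valued models can be brute-forced directly from the definition: an interpretation is a model iff every node's value agrees with its acceptance condition evaluated on the values of its parents. A sketch, assuming a representation with per-node parent lists and Boolean acceptance functions:

```python
from itertools import product

def two_valued_models(nodes, parents, conditions):
    """Enumerate the two-valued models of an ADF by checking all
    2^|nodes| interpretations; a sketch for tiny ADFs only."""
    models = []
    for values in product([False, True], repeat=len(nodes)):
        v = dict(zip(nodes, values))
        # v is a model iff every node matches its acceptance condition
        if all(v[s] == conditions[s](*(v[p] for p in parents[s]))
               for s in nodes):
            models.append(v)
    return models

# An AF attack b -> a encoded as an ADF: a is accepted iff b is not.
nodes = ["a", "b"]
parents = {"a": ["b"], "b": []}
conditions = {"a": lambda b: not b, "b": lambda: True}
ms = two_valued_models(nodes, parents, conditions)
```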
    argumentation and social networks [LM11] [ET13]
    argumentation and social networks

    a: The Wonder-Phone is the best new generation phone. [+20 / −20]
    b: No, the Magic-Phone is the best new generation phone. [+20 / −20]
    c: Here is a [link] to a review of the Magic-Phone giving poor scores due to bad battery performance. [+60 / −10]
    d: The author of c is ignorant, since subsequent reviews noted that only one of the first editions had such problems: [links]. [+10 / −40]
    e: d is wrong. I found out that c knows about that but withheld the information. Here's a [link] to another thread proving it! [+40 / −10]
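Vote-aware semantics in the spirit of social abstract argumentation [LM11] give each argument a base score from its votes and damp it by its attackers. The following is one simple instantiation (product-style damping by the strongest attacker; the constant `eps` and the exact update rule are illustrative, not the precise semantics of [LM11]):

```python
def social_scores(votes, attacks, eps=0.1, iters=1000):
    """Fixed-point sketch of a vote-aware semantics: base score
    tau(a) = pos / (pos + neg + eps), damped by the strongest
    attacker's current score. An illustrative instantiation."""
    tau = {a: p / (p + n + eps) for a, (p, n) in votes.items()}
    val = dict(tau)
    for _ in range(iters):
        # Jacobi-style update: all arguments use the previous values
        val = {a: tau[a] * (1 - max((val[b] for (b, t) in attacks
                                     if t == a), default=0.0))
               for a in votes}
    return val

# The thread above: b attacks a, c attacks b, d attacks c, e attacks d.
votes = {"a": (20, 20), "b": (20, 20), "c": (60, 10),
         "d": (10, 40), "e": (40, 10)}
attacks = [("b", "a"), ("c", "b"), ("d", "c"), ("e", "d")]
scores = social_scores(votes, attacks)
```

On this acyclic chain the iteration converges exactly: e keeps its base score, which weakens d, which in turn lets c stay strong and keeps b weak.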
    argumentation and social networks: http://www.quaestio-it.com/
    natural language interfaces

    a1: σA ⇒ γ    a2: σB ⇒ ¬γ    a3: ⇒ a1 a2

    First Scenario
    a1: Alice suggests to move in together with Jane.
    a2: Stacy suggests otherwise, because Jane might have a hidden agenda.
    a3: Stacy is your best friend.

                  a1     a2     don't know
    % agreement   12.5   68.8   18.8

    Second Scenario
    a1: TV1 suggests that tomorrow it will rain.
    a2: TV2 suggests that tomorrow will be cloudy but will not rain.
    a3: TV2 is generally more accurate than TV1.

                  a1     a2     don't know
    % agreement   5.0    50.0   45.0
    natural language interfaces: Scrutable Autonomous Systems (in particular from 7'30")
    credits: template adapted from mtheme, https://github.com/matze/mtheme