Slides of STAIRS 2020
Towards an Explainable Argumentation-based Agent
Mariela Morveli Espinoza and Cesar Augusto Tacla
Program in Electrical and Computer Engineering
Federal University of Technology of Parana
Curitiba - Brazil
August 30 2020
Morveli-Espinoza and Tacla CPGEI-UTFPR Towards an Explainable Argumentation-based Agent August 30 2020 1 / 14
Outline
1 Introduction
Motivation
Problem
Proposal Overview
2 Background
Argumentation
3 What has been done so far
Formalization of the BBGP model
Generating the explanations
4 What still needs to be done
Motivation
Motivating Example: Rescue robots
Some goals of a rescue robot in a natural disaster scenario:
Wander the area searching for people who need help,
Take severely injured people to the hospital,
Send healthy people to the shelter,
...
When the robot finds a person, it has to decide which goal to pursue based on its
perceptions (beliefs).
After the rescue work, the robots can be asked to explain why a wounded person
was sent to the shelter instead of being taken to the hospital, or why a robot
decided to take a person x to the hospital first, instead of another person y.
Therefore...
It is important to endow agents (possibly robots) with the ability to explain
their decisions about the goals they pursued or are pursuing.
Problem
BDI Agents
Beliefs about itself, other agents, and its environment
Desires about future states
Intentions about its own future actions
Limitations
In BDI agents, there are only two stages in the intention formation process. This
means that a fine-grained analysis of this process is missing, one that could
improve and enrich the informational quality of the explanations.
BDI agents are not endowed with explainability abilities.
Research Questions
1 How to improve the analysis of the intention formation process?
2 How can explanations be generated by BDI (or extended) agents?
Proposal Overview
1 Castelfranchi and Paglieri (2007) proposed an extended model for intention
formation, named the Belief-based Goal Processing model (BBGP model, for short).
The BBGP model has four stages:
activation
evaluation
deliberation
checking
Four different statuses are defined for a goal:
active (= desire)
pursuable
chosen (= future-directed intention)
executive (= present-directed intention)
2 Argumentation-based approach. In the intention formation process, arguments
can represent reasons for a goal to change (or not) its status.
Argumentation
Abstract Argumentation [Dung, 1995]
In abstract argumentation frameworks (AFs) statements (called arguments) are
formulated together with a relation (attack) between them.
The conflicts between the arguments are resolved by means of argumentation
semantics.
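As a minimal illustration of these notions, the following sketch computes the grounded extension of a small AF by iterating Dung's characteristic function. The example framework and function name are my own, not from the slides:

```python
# Minimal sketch of a Dung-style abstract argumentation framework (AF).
# Arguments are plain labels; `attacks` is the attack relation as a set
# of (attacker, attacked) pairs.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function: an argument is acceptable
    w.r.t. a set S if every one of its attackers is attacked by S.
    The least fixed point is the grounded extension."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Example: A attacks B, B attacks C -> A is unattacked, and A defends C
af = ({"A", "B", "C"}, {("A", "B"), ("B", "C")})
print(sorted(grounded_extension(*af)))  # -> ['A', 'C']
```

A mutual attack (A attacks B, B attacks A) yields the empty grounded extension, reflecting the skeptical character of this semantics.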
What has been done so far
Formalization of the BBGP model
Formalization of the BBGP Model [Morveli-Espinoza et al., 2019(a)]
ACTIVATION STAGE: sleeping goals → active goals (via activation arguments)
EVALUATION STAGE: active goals, or desires → pursuable goals (via evaluation arguments)
DELIBERATION STAGE: pursuable goals → chosen goals (via deliberation arguments)
CHECKING STAGE: chosen goals → executive goals, or intentions (via checking arguments)
You can find our simulator at:
https://github.com/henriquermonteiro/BBGP-Agent-Simulator/
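The four-stage goal life cycle can be sketched as a simple status machine. This is an illustrative sketch, not the authors' simulator: the stage and status names follow the slides, but the argument checks are stubbed as mere presence of a supporting argument:

```python
# Illustrative sketch of the BBGP goal life cycle (not the authors' code).
# A goal advances one status per stage when a supporting argument for
# that stage is available; it stops at the first stage that lacks one.

STATUSES = ["sleeping", "active", "pursuable", "chosen", "executive"]
STAGES = {  # stage name -> status reached when the stage is passed
    "activation": "active",
    "evaluation": "pursuable",
    "deliberation": "chosen",
    "checking": "executive",
}

def advance(status, stage_arguments):
    """Move a goal through the four stages in order while each stage
    has at least one supporting argument."""
    for stage, next_status in STAGES.items():
        if STATUSES.index(next_status) <= STATUSES.index(status):
            continue  # this stage was already passed
        if not stage_arguments.get(stage):
            break     # no supporting argument: keep the current status
        status = next_status
    return status

# A goal with activation and evaluation arguments, but none for
# deliberation, ends up pursuable (a desire that was not chosen):
print(advance("sleeping", {"activation": ["a1"], "evaluation": ["e1"]}))
# -> pursuable
```

In the real model each transition is decided by argumentation (attacks between stage arguments, resolved by a semantics), not by a boolean check; the sketch only shows the pipeline structure.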
Generating the explanations
Rescue robots scenario: Partial and Complete Explanations
[Morveli-Espinoza et al., 2019(b)]
Why did you take Tom to the hospital?
FIGURE – Partial explanation (argumentation graph over arguments A–I)
Partial explanation in natural language
Tom had a fractured bone (G), which was in his arm (E), and it was an open
fracture (C); therefore, he was severely injured (B, H). Since he was severely
injured, I took him to the hospital (A, I).
FIGURE – Complete explanation (argumentation graph over arguments A–I)
Complete explanation in natural language
Tom had a fractured bone (G), which was in his arm (E). Given that he had a
fractured bone, he might be considered severely injured (H); however, since the
fracture was in his arm, it might not be considered a severe injury (F). Finally,
I noted that it was an open fracture (C), which determines, without exception,
that it was a severe injury (B). For these reasons I took him to the hospital
(A, B).
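One way to read the two kinds of explanation: a partial explanation keeps only the support chain behind the decision, while a complete explanation also includes the defeated counter-arguments. The sketch below assumes explicit support and attack edges and labels that loosely mirror the slide's A–I; the exact formalization is in Morveli-Espinoza et al. (2019(b)), so treat the helper names as hypothetical:

```python
# Hedged sketch: extracting partial vs. complete explanations for a
# decision argument from an AF with explicit support and attack edges.
# Edge sets and helper names are illustrative assumptions.

def partial_explanation(decision, supports):
    """All arguments reachable from `decision` via support links:
    the accepted line of reasoning only."""
    seen, stack = set(), [decision]
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(b for (b, c) in supports if c == a)
    return seen

def complete_explanation(decision, supports, attacks):
    """The partial explanation plus the counter-arguments that attacked
    it, each with its own support chain."""
    core = partial_explanation(decision, supports)
    counter = {b for (b, c) in attacks if c in core}
    full = set(core)
    for b in counter:
        full |= partial_explanation(b, supports)
    return full

# Tiny rescue-style example: G..I support the hospital decision A via
# two chains, while F attacks the severity claim H and is itself defeated.
supports = {("G", "H"), ("H", "I"), ("I", "A"), ("C", "B"), ("B", "A")}
attacks = {("F", "H"), ("B", "F")}
print(sorted(partial_explanation("A", supports)))
# -> ['A', 'B', 'C', 'G', 'H', 'I']
print(sorted(complete_explanation("A", supports, attacks)))
# -> ['A', 'B', 'C', 'F', 'G', 'H', 'I']
```

The difference between the two outputs is exactly the defeated counter-argument F, which is what makes the complete explanation richer: it says not only why the decision holds, but also why the objection to it failed.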
What still needs to be done
We have focused on achievement (or procedural) goals; however, maintenance
and declarative goals are also considered in the evaluation stage of the BBGP
model.
In Morveli-Espinoza et al. (2019(c)), we took uncertainty into account for
identifying and dealing with incompatibilities among goals. It would be
interesting to also consider uncertainty in the elements of activation,
evaluation, and deliberation rules. How would this impact the results? How would
it impact the explainability skills of agents?
Regarding the generated explanations, there is still a lot of work to do. We have
proposed two kinds of explanations; however, it is necessary to study how to deal
with complex questions, which require more elaborate and adequate explanations.
In this sense, a "good" explanation may include elements from different AFs.
Which elements? How should they be organized? What else should be taken into
account for generating an explanation?
References
P. M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic
reasoning, logic programming and n-person games, Artificial Intelligence, vol. 77, no. 2,
pp. 321–357, 1995.
C. Castelfranchi and F. Paglieri, The role of beliefs in goal dynamics: Prolegomena to a
constructive theory of intentions, Synthese, vol. 155, no. 2, pp. 237–263, 2007.
M. Morveli-Espinoza, A. T. Possebom, J. Puyol-Gruart, and C. A. Tacla, Argumentation-based
intention formation process, DYNA, vol. 86, no. 208, pp. 82–91, 2019.
M. Morveli-Espinoza, A. Possebom, and C. A. Tacla, Argumentation-Based Agents that
Explain Their Decisions. In Proceedings of the 8th Brazilian Conference on Intelligent
Systems (BRACIS), pp. 467–472, 2019.
M. Morveli-Espinoza, J. C. Nieves, A. Possebom, and C. A. Tacla, Dealing with
Incompatibilities among Procedural Goals under Uncertainty. Inteligencia Artificial.
Ibero-American Journal of Artificial Intelligence, vol. 22, no. 64, 2019.
Thank you!
Questions?
Send me an e-mail at morveli.espinoza@gmail.com