BUILDING INTELLIGENT SYSTEMS
(THAT CAN EXPLAIN)
Ilaria Tiddi
Faculty of Computer Science & Faculty of Behavioural Sciences
Vrije Universiteit Amsterdam
@IlaTiddi
DISCLAIMER
This is not a presentation on eXplainable AI (XAI)
...but rather on systems using data to make sense of other data
● Why
● What
● Which
● How
● Examples
● Lessons learnt
GENERATING EXPLANATIONS
Why do we need (systems generating) explanations?
● to learn new knowledge
● to find meaning (reconciling contradictions in our knowledge)
● to socially interact (creating a shared meaning with the others)
● ...and because GDPR says so
Users have a “right to explanation”
for any decision made about them
EXPLANATIONS: WHY?
Different disciplines, common features [1]:
● Generation of coherence between old and new knowledge
● Same elements (theory, anterior, posterior, circumstances)
● Same processes (psychological, linguistic)
[1] Tiddi et al. (2015), An Ontology Design Pattern to Define Explanations, K-CAP2015.
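The shared elements above (theory, anterior, posterior, circumstances) can be sketched as a tiny data structure. This is a hypothetical illustration, not the ontology pattern from [1]; all field names and example values are made up.

```python
# Minimal sketch of the common structure of an explanation:
# an antecedent (explanans) accounts for a consequent (explanandum)
# under a theory, within a given context. Illustrative only.
from dataclasses import dataclass

@dataclass
class Explanation:
    theory: str       # the governing law or assumption
    antecedent: str   # explanans: what does the explaining
    consequent: str   # explanandum: what is being explained
    context: str      # circumstances under which the relation holds

e = Explanation(
    theory="search interest follows media releases",
    antecedent="a new season of the TV show premiered",
    consequent="peak in searches for 'A Song of Ice and Fire'",
    context="April 2011",
)
print(e.consequent, "because", e.antecedent)
```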
EXPLANATIONS: WHAT/1
[Timeline of theories of explanation: Plato & Aristotle (V-IV BC), the Determinists (XVII AC), Charles Peirce (1903), Hempel & Oppenheim (1948), Weber & Durkheim (1964), ? (2015)]
EXPLANATIONS: WHAT/2
● Explication = Explanation (⋍ Interpretation)
● Justification = why a decision is good
● Explic-/Interpret-/Explainability = the degree to which an observer can understand the cause of a decision
Which types?
● factual: why specific ‘everyday’ events occur
● scientific: generalising scientific theories
● behavioural: explaining behaviour and decision making
Which processes?
● cognitive: determining the causes (explanans) of an event (explanandum) and relating these to a particular context
● social: transferring knowledge between explainer and explainee
EXPLANATIONS: WHICH?
Which audience?
● engineers/scientists/experts
● end-users
Which characteristics?
● Transparency (traceability + verifiability)
● Intelligibility + clarity
EXPLANATIONS: WHICH?
Which language?
● Visual
● Written
● Spoken
Reuse!! Existing knowledge sources serve as background knowledge (the
“old”) to generate explanations (the “new”):
● Plenty of available sources (KGs, datahubs, open data...)
● Connected, centralised hubs
● Multi-domain, allowing serendipity
EXPLANATIONS: HOW?
Some examples
[2] Tiddi (2016), Explaining Data Patterns using Knowledge from the Web of Data, Ph.D. thesis.
Demo: http://dedalo.kmi.open.ac.uk/
Explaining web searches
using the Linked Data Cloud
Why do people search for “A Song of Ice and
Fire” only in certain periods?
EXPLAINING DATA PATTERNS
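The idea above — explaining why a pattern (e.g. periodic search peaks) occurs by mining background knowledge from Linked Data — can be sketched as a search for properties shared by the pattern items but by none of the others. This is a hypothetical toy version, not Dedalo's actual algorithm; all data and identifiers below are invented for illustration.

```python
# Toy sketch: explain a set of "pattern" items (months with search peaks)
# by finding background-knowledge properties that cover every pattern
# item and no non-pattern item. All data is made up.

def explain_pattern(pattern, others, background):
    """Return background properties shared by all pattern items
    and by no non-pattern item."""
    candidates = set.intersection(*(background[i] for i in pattern))
    counterexamples = set().union(*(background[i] for i in others)) if others else set()
    return candidates - counterexamples

# Toy background knowledge: months linked to facts in a KG.
background = {
    "2011-04": {"got:season_premiere", "dbo:Spring"},
    "2012-04": {"got:season_premiere", "dbo:Spring"},
    "2013-04": {"dbo:Spring"},          # spring month, but no premiere
    "2011-07": {"dbo:Summer"},
    "2012-01": {"dbo:Winter"},
}
pattern = ["2011-04", "2012-04"]        # months with search peaks
others = ["2013-04", "2011-07", "2012-01"]

print(explain_pattern(pattern, others, background))
# -> {'got:season_premiere'}
```

Note how "spring" is discarded once a counterexample month appears: the candidate explanation must separate the pattern from the rest, not merely describe the pattern.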
Explaining user online activities
with Wikidata, recommending
Open University courses
[3] http://afel-project.eu
EXPLAINING BEHAVIOURS
Using identity links to find:
● The NYT dataset is about places in
the US (trivial)
● The Reading Experience Dataset is about poets/novelists who committed suicide (less trivial)
[4] Tiddi (2014), Quantifying the bias in data links (EKAW2014)
owl:sameAs
skos:exactMatch
...
A
B
Projection of B in A
EXPLAINING BIAS IN DATASETS
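The "projection of B in A" above can be sketched as follows: follow the identity links (owl:sameAs, skos:exactMatch, ...) from A's entities into B, collect B's descriptions of those entities, and count which types dominate — the dominant types hint at what A is biased towards. A hypothetical toy version; all entities, links, and types are invented.

```python
# Toy sketch: project dataset B into dataset A through identity links,
# then count the projected types to surface what A is mostly "about".
from collections import Counter

# Identity links: entity in A -> equivalent entity in B
same_as = {
    "a:NYT_paris": "b:Paris_Texas",
    "a:NYT_springfield": "b:Springfield_Illinois",
    "a:NYT_london": "b:London_UK",
}

# Background facts about B's entities (e.g. types from a KG)
b_types = {
    "b:Paris_Texas": {"dbo:Place", "dbo:USCity"},
    "b:Springfield_Illinois": {"dbo:Place", "dbo:USCity"},
    "b:London_UK": {"dbo:Place", "dbo:City"},
}

def project(same_as, b_types):
    """Count B's types over the entities that A links to."""
    counts = Counter()
    for a_entity, b_entity in same_as.items():
        counts.update(b_types.get(b_entity, set()))
    return counts

counts = project(same_as, b_types)
print(counts.most_common(2))
# -> [('dbo:Place', 3), ('dbo:USCity', 2)]  i.e. A skews towards US places
```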
Using open data (DBpedia,
MK:DataHub) to enhance
smart-city applications
[5] Tiddi et al. (2018), Allowing exploratory search from podcasts: the case of Secklow Sounds Radio (ISWC2018)
EXPLAINING RADIO CONTENTS
Semantic mapping with
ShapeNet and ConceptNet
EXPLAINING SCENES IN MOTION
[6] Chiatti et al., Task-agnostic, ShapeNet-based Object Recognition for Mobile Robots, DARLI-AP 2019 (EDBT/ICDT 2019)
Explaining and rebalancing
LSTM networks using linguistic
corpora (e.g. FrameNet)
[7] Mensio et al., Towards Explainable Language Understanding for Human Robot Interaction
EXPLAINING NEURAL ATTENTIONS
Cooperation Databank: 50
years of scientific studies on
human cooperation
Scholarly KGs (e.g. Scigraph) to
support systematic
reviews/meta-analyses
[8] https://amsterdamcooperationlab.com/databank/
EXPLAINING SCIENTIFIC RESEARCH
Bringing together social and
computer scientists
Reflect on the threats and
misuse of our technologies
[9] https://kmitd.github.io/recoding-black-mirror/
EXPLAINING ETHICS TO MACHINES?
Sharing and reusing is the key to explainable systems
● Lots of data
● Lots of theories (e.g. insights from the social/cognitive sciences [10])
(My) desiderata:
+ cross-disciplinary discussions
+ formalised common-sense knowledge (Web of entities, Web of actions)
+ links between data, allowing serendipitous knowledge discovery
SOME TAKEAWAYS
[10] Tim Miller (2018), Explanations in artificial intelligence: Insights from the social sciences, Artificial Intelligence.
Thank you
...and all of them!
@IlaTiddi
i.tiddi@vu.nl
kmitd.github.io/ilaria