Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, the accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language, to investigate whether the core semantics defined for a large-scale natural language processing system could help address accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only ‘subj-action-(obj)’ structures and causal, temporal, spatial and personal-world links. The first part of the paper summarizes the difficulties to be addressed and the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. The second part illustrates a proof of concept for the application of such a knowledge representation to support accountability, together with a detailed example of the analysis obtained with a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the proposed knowledge representation can be used to support accountability, looking inside the box.
Accountability of Artificial Intelligence
1. Looking inside the Black Box: Core Semantics towards Accountability of Artificial Intelligence
Roberto Garigliano, Luisa Mich
October 8th, 2019 - SG65: Colloquium in honour of Stefania Gnesi, Porto, Portugal
3. AI comes with challenges and risks
4. Main concerns
Francesca Rossi, IBM AI Ethics Global Leader – womENcourage, Rome, September 2019
5. The black box problem
AI solutions support decisions and control systems, giving advice and recommendations that may imply serious risks.
Many AI systems run programs whose output cannot usually be traced back to specific parts of the input.
Machine learning approaches train AI systems to solve problems by showing them large numbers of examples, but there is no understanding of, e.g., the concept of a cat in image recognition tasks.
6. Natural Language
To address the black box problem, we need to be able to explain why a given solution or behaviour has been chosen, providing information on the data and knowledge used and on how they were processed, including the stakeholders involved.
NL understanding could support the explanation of the output of AI systems embedding any form of NLP (e.g. translators, chatbots).
7. State of the art of NLP systems
8. AI systems do not “understand”
Translators or personal assistants use huge numbers of examples and are able to associate input and output patterns, but they do not understand the meaning of the words.
Current NLP systems are not able to sustain “real” conversations.
The best results on the Winograd Schema Challenge, a test for NL understanding, are around 70%. AI systems have to give the correct interpretation of pairs of sentences, e.g.:
The trophy would not fit in the brown suitcase because it was too big.
What was too big?
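To make the challenge concrete, here is a small, hypothetical Python sketch of a Winograd-style schema pair and a naive scoring loop. The data class, the resolve() stub, and all field names are illustrative assumptions, not official challenge tooling and not CoreSystem code.

```python
# Hypothetical sketch of a Winograd-style schema item and a scoring loop.
from dataclasses import dataclass

@dataclass
class WinogradItem:
    sentence: str      # sentence containing an ambiguous pronoun
    pronoun: str       # the pronoun to resolve
    candidates: tuple  # the two candidate referents
    answer: str        # the correct referent

ITEMS = [
    WinogradItem(
        sentence="The trophy would not fit in the brown suitcase because it was too big.",
        pronoun="it",
        candidates=("the trophy", "the suitcase"),
        answer="the trophy",
    ),
    WinogradItem(
        # Changing one word flips the correct referent.
        sentence="The trophy would not fit in the brown suitcase because it was too small.",
        pronoun="it",
        candidates=("the trophy", "the suitcase"),
        answer="the suitcase",
    ),
]

def resolve(item: WinogradItem) -> str:
    """Placeholder resolver; a real system would need commonsense reasoning."""
    return item.candidates[0]  # naive baseline: always pick the first candidate

correct = sum(resolve(item) == item.answer for item in ITEMS)
print(f"accuracy: {correct / len(ITEMS):.0%}")  # 50% for the naive baseline
```

The paired construction is what makes the test hard: surface statistics cannot separate the two variants, so a resolver scoring well above chance needs some grasp of meaning.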
9. Need for «Core Semantics»
To ‘understand’ the meaning of texts as ‘we’ understand it, it is necessary to represent their content, i.e., the semantics of NL sentences.
There are many ways in which the same meaning can be expressed.
A core semantics represents the content of a sentence independently of its surface description.
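As an illustration of surface independence, the toy sketch below maps an active and a passive phrasing of the same statement to one ‘subj-action-(obj)’ triple. The regular expressions, the example sentences and the function name are invented for this example and are not part of CoreSystem.

```python
import re

def core_triple(sentence: str):
    """Toy normalization: map active and passive surface forms of
    'X verb the Y' / 'The Y was verb by X' to one (subj, action, obj) triple."""
    s = sentence.rstrip(".").lower()
    passive = re.match(r"the (\w+) was (\w+) by (\w+)", s)
    if passive:
        obj, action, subj = passive.groups()
        return (subj, action, obj)
    active = re.match(r"(\w+) (\w+) the (\w+)", s)
    if active:
        return active.groups()
    raise ValueError("pattern not covered by this toy example")

# Two different surface descriptions, one core-semantic content.
assert core_triple("Mary bought the house.") == ("mary", "bought", "house")
assert core_triple("The house was bought by Mary.") == ("mary", "bought", "house")
```

A real core-semantic analysis would of course use full parsing rather than pattern matching; the point is only that distinct surface forms collapse to the same canonical structure.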
10. CoreSystem
The internal representations of CoreSystem, a prototype large-scale NLP system (see the sketch below):
• Provide a clear representation of the information and the underlying structures
• Find common references to people, places, organizations, events, etc.
• Connect events along temporal, causal and spatial chains
• Extract modal information, such as desires, beliefs, plans, likes, duties, etc.
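As a rough illustration of the kinds of structures listed above, the sketch below models coreferent entities, events with temporal/causal/spatial links, and modal (personal-world) information. All class and field names are assumptions made for this example; they are not CoreSystem’s actual internals.

```python
# Hypothetical data model for the structures the slide lists (Python 3.10+).
from dataclasses import dataclass, field

@dataclass
class Entity:
    eid: str              # shared id links coreferent mentions
    mentions: list[str]   # e.g. ["Mary", "she", "She"]
    kind: str             # "person", "place", "organization", "event", ...

@dataclass
class Event:
    subj: str             # entity id of the agent
    action: str           # normalized action, as in 'subj-action-(obj)'
    obj: str | None = None                               # optional object id
    links: dict[str, str] = field(default_factory=dict)  # temporal/causal/spatial

@dataclass
class Modal:
    holder: str           # whose personal world: beliefs, desires, duties, ...
    modality: str         # "believes", "wants", "must", ...
    event: Event          # the embedded event

# "Mary sold the house because she moved to Porto. She hopes to return."
mary = Entity("e1", ["Mary", "she", "She"], "person")
house = Entity("e2", ["the house"], "place")
sell = Event("e1", "sell", "e2")
move = Event("e1", "move", links={"spatial": "Porto"})
sell.links["causal"] = "caused_by:move"   # causal chain between events
hope = Modal("e1", "wants", Event("e1", "return"))
```

Representations of this textual, inspectable kind are what make it possible for a human to check what the system extracted and why, which is the accountability argument of the talk.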
11. Conclusion
The practicability of the core semantics has been tested using a prototype large-scale NLP system.
For accountability goals, the results indicate that a core semantics produces textual representations that can be easily understood and checked by humans.