Explainable Artificial Intelligence (XAI)
and Machine Learning (ML) Models
Rune Sætre, NTNU
Trial Lecture*
25 JAN 2018
Based on slides by David Gunning
DARPA/Information Innovation Office (I2O),
11 AUG 2016, and 21 DEC 2017
* Slides available on https://www.slideshare.net/runesaetre or http://bit.ly/2GbRyYU
Rune Sætre
• 50% Associate Professor, Department of Computer and Information Science (IDI)
• 50% Tekna NTNU (labor union) leader
• Married to Yuko from Japan; father to Ken
• MSc and PhD in AI (2003, 2006)
– http://BussTUC.idi.ntnu.no, endors.no
– Exchange to UCSB, LMU Munich, Tokyo
Lecture Outline: XAI and ML models
• Self introduction
• A: Motivation
• B: Explainable AI (XAI)
– Machine Learning Models (ML)
– Human Computer Interaction (HCI)
– Psychological Model of Explanation
• C: Ongoing xAI DARPA Project
– Technical Areas
– Participants & Examples
• Quiz (http://kahoot.it) and Questions
• Summary
A. Introduction - The Need for Explainable AI
[Images: IBM Watson (sensemaking) and AlphaGo (operations) as examples of current AI applications]
Questions a user asks an opaque AI system:
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
• We are entering a new age of AI applications
• Machine learning is the core technology
• Machine learning models are opaque, non-intuitive, and difficult for people to understand
©IBM; ©Marcin Bajer/Flickr; ©Eric Keenan, U.S. Marine Corps; ©NASA.gov
• Current AI systems offer tremendous benefits, but their effectiveness will be limited by the machine’s inability to explain its decisions and actions to users.
• Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.
B. Program Scope – XAI Concept
Today: Training Data → Machine Learning Process → Learned Function → Decision or Recommendation, leaving the task user asking:
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
XAI: Training Data → New Machine Learning Process → Explainable Model + Explanation Interface, letting the task user say:
• I understand why
• I understand why not
• I know when you succeed
• I know when you fail
• I know when to trust you
• I know why you erred
B. Program Scope – XAI Concept
• The target of XAI is an end user who:
o depends on decisions, recommendations, or actions of the system
o needs to understand the rationale for the system’s decisions to understand, appropriately trust, and effectively manage the system
• The XAI concept is to:
o provide an explanation of individual decisions
o enable understanding of overall strengths & weaknesses
o convey an understanding of how the system will behave in the future
o convey how to correct the system’s mistakes (perhaps)
[Diagram: the XAI pipeline from the previous slide, repeated]
B. Program Scope – XAI Development Challenges
• Explainable Models: develop a range of new or modified machine learning techniques to produce more explainable models
• Explanation Interface: integrate state-of-the-art HCI with new principles, strategies, and techniques to generate effective explanations
• Psychology of Explanation: summarize, extend, and apply current psychological theories of explanation to develop a computational theory
[Diagram: the XAI pipeline annotated with these three challenge areas]
The challenges map onto two technical areas: TA 1, Explainable Learners (the explainable model and explanation interface), and TA 2, Psychological Models of explanation.
B.1 Explainable Models
[Chart (notional): today’s learning techniques plotted by explainability vs. prediction accuracy. Deep learning and neural nets score highest on accuracy but lowest on explainability; decision trees are the most explainable but least accurate; SVMs, ensemble methods (e.g., random forests), statistical models, and graphical models (Bayesian belief nets, Markov models, SRL, CRFs, HBNs, MLNs, AOGs) fall in between.]
New Approach: create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance. Three strategies follow, each with a small illustrative sketch.
• Deep Explanation: modified deep learning techniques to learn explainable features.
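As a concrete (if generic) example of explaining at the feature level, the sketch below computes input-gradient saliency for a tiny PyTorch classifier: the gradient of the winning class score with respect to the input marks the pixels that drove the decision. The network and random input are stand-ins for illustration, not the program teams' actual techniques.

```python
# Minimal input-gradient saliency sketch (a common deep-explanation building block).
import torch
import torch.nn as nn

# Tiny stand-in classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "input image"
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Gradient of the winning class score w.r.t. the input pixels:
scores[0, top_class].backward()
saliency = image.grad.abs().squeeze()  # large values = pixels that drove the decision
print(saliency.shape)  # torch.Size([28, 28])
```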
• Interpretable Models: techniques to learn more structured, interpretable, causal models.
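A sketch of the interpretable-models idea, where the model's structure is itself the explanation: a shallow decision tree on the classic Iris data prints directly as human-auditable if/then rules. This assumes scikit-learn and is an illustration of the general strategy, not a program deliverable.

```python
# A model whose learned structure can be read as rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# The fitted tree prints as human-auditable if/then rules:
print(export_text(tree, feature_names=list(iris.feature_names)))
```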
• Model Induction: techniques to infer an explainable model from any model, treated as a black box.
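A minimal sketch of model induction as a global surrogate (a whole-model cousin of local methods such as LIME): query the black box, fit a small interpretable model to its predictions, and report how faithfully the surrogate mimics it. The scikit-learn models below stand in for an arbitrary black box.

```python
# Infer an explainable surrogate from a black-box model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Query the black box, then train a small tree to mimic its answers:
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {fidelity:.1%} of inputs")
```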
B.2 Explanation Interface
• State-of-the-art Human Computer Interaction (HCI)
o UX design
o Visualization
o Language understanding & generation
• New principles and strategies
o Explanation principles
o Explanation strategies
o Explanation dialogs (see the sketch after this list)
• HCI in the broadest sense
o Cognitive science
o Mental models
• Joint development as an integrated system
o In conjunction with the Explainable Models
• Existing machine learning techniques
o Also consider explaining existing ML techniques
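As a toy illustration of the explanation-dialog idea, the sketch below generates natural-language answers from templates over a model's evidence. The question types echo the user questions earlier in the deck; the templates and the `evidence` fields are invented assumptions, not any XAI team's interface.

```python
# Template-based explanation dialog (illustrative only).
TEMPLATES = {
    "why": "I recommended {label} because {feature} was {value}.",
    "why_not": "I did not recommend {alt} because {feature} argued against it.",
}

def explain(question: str, evidence: dict) -> str:
    """Fill the template for the asked question from the model's evidence."""
    return TEMPLATES[question].format(**evidence)

evidence = {"label": "item of interest", "alt": "background clutter",
            "feature": "truck proximity", "value": "high"}
print(explain("why", evidence))
print(explain("why_not", evidence))
```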
B.3 Psychology of Explanation
• Psychological theories of explanation
o Structure and function of explanation
o Role of explanation in reasoning and learning
o Explanation quality and utility
• Theory summarization
o Summarize existing theories of explanation
o Organize and consolidate the theories most useful for XAI
o Provide advice and consultation to XAI developers and the evaluator
• Computational model
o Develop a computational model of the theory
o Generate predictions of explanation quality and effectiveness
• Model testing and validation
o Test the model against a set of standard evaluation settings
B.4 Emphasis and Scope of XAI Research
[Venn diagram: the XAI emphasis lies at the intersection of Machine Learning, Human Computer Interaction, and End User Explanation; the pairwise overlaps are Interactive ML, Visual Analytics, and Question Answering Dialogs]
C: DoD Funding Categories

Category                     | Definition
Basic Research (6.1)         | Systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and/or observable facts, without specific applications in mind.
Applied Research (6.2)       | Systematic study to gain the knowledge or understanding necessary to determine the means by which a recognized and specific need may be met.
Technology Development (6.3) | Includes all efforts that have moved into the development and integration of hardware (and software) for field experiments and tests.

[XAI is marked against these categories on the original slide.]
C: Explainable AI – Challenge Problem Areas

Data Analytics (classification learning task): an analyst is looking for items of interest in massive multimedia data sets.
• Learn a model to perform the task: classifies items of interest in a large data set
• Explain decisions and recommendations to the user: explains why/why not for recommended items
• Use the explanation to perform a task: the analyst decides which items to report or pursue
• Example: multimedia data showing two trucks performing a loading activity

Autonomy (reinforcement learning task; ArduPilot & SITL simulation): an operator is directing autonomous systems to accomplish a series of missions.
• Learn a model to perform the task: learns decision policies for simulated missions
• Explain actions to the user: explains behavior in an after-action review
• Use the explanation to perform a task: the operator decides which future tasks to delegate

Image credits: ©Getty Images, ©US Army, ©Air Force Research Lab, ©ArduPilot.org
C.1 Data Analysis
• Machine learning to classify items, events, or patterns of interest (a miniature classify-and-explain sketch follows this list)
o In heterogeneous, multimedia data
o Include structured/semi-structured data in addition to images and video
o Require meaningful explanations that are not obvious in video alone
• Proposers should describe:
o Data sets and training data (including background knowledge sources)
o Classification function to be learned
o Types of explanations to be provided
o User decisions to be supported
• Challenge problem progression
o Describe an appropriate progression of test problems to support your development strategy
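To make the classify-and-explain loop concrete, here is a hedged miniature using a linear text classifier, whose weights support direct per-item attribution for an analyst. The four toy documents and the "of interest" label are invented for illustration.

```python
# Classify items and report the terms that drove each recommendation.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["truck loading cargo at depot", "truck driving on highway",
        "forklift loading pallets", "car driving past"]
labels = [1, 0, 1, 0]  # 1 = activity of interest (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Per-item explanation: which terms pushed the score toward "of interest"?
item = vec.transform(["truck loading at night"])
contrib = item.toarray()[0] * clf.coef_[0]
terms = np.array(vec.get_feature_names_out())
top = terms[np.argsort(contrib)[::-1][:3]]
print("flagged because of:", list(top))
```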
C.2 Autonomy
• Reinforcement learning to learn sequential decision policies (a toy policy-explanation sketch follows this list)
o For a simulated autonomous agent (e.g., a UAV)
o Explanations may cover other needed planning, decision, or control modules, as well as decision policies learned through reinforcement learning
o Explain high-level decisions that would be meaningful to the end user (i.e., not low-level motor control)
• Proposers should describe:
o Simulation environment
o Types of missions to be covered
o Decision policies and mission tasks to be learned
o Types of explanations to be provided
o User decisions to be supported
• Challenge problem progression
o Describe an appropriate progression of test problems to support your development strategy
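As a toy illustration of explaining a learned policy at a meaningful level ("why this action and not another?"), the sketch below runs tabular Q-learning on a 1-D grid and phrases the policy's choice contrastively via the learned action values. The grid world is an invented stand-in for the program's simulated missions.

```python
# Contrastive explanation of an RL decision from learned Q-values.
import numpy as np

n_states, actions = 5, ["left", "right"]
Q = np.zeros((n_states, 2))
rng = np.random.default_rng(0)

for _ in range(2000):                        # tiny Q-learning loop
    s = rng.integers(n_states)
    a = rng.integers(2)
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == n_states - 1 else 0.0   # reward at the right edge
    Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])

s = 2
best = Q[s].argmax()
print(f"In state {s} I chose '{actions[best]}' because its expected return "
      f"({Q[s, best]:.2f}) exceeds '{actions[1 - best]}' ({Q[s, 1 - best]:.2f}).")
```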
C.3 Evaluation – Evaluation Sequence
• XAI developers are presented with a problem domain
• They apply machine learning techniques to learn an explainable model
• They combine it with the explanation interface to construct an explainable system
• The explainable system delivers and explains decisions or actions to a user who is performing domain tasks
• The system’s decisions and explanations contribute (positively or negatively) to the user’s performance of the domain tasks
• The evaluator measures the learning performance and explanation effectiveness
• The evaluator also evaluates existing machine learning techniques to establish baseline measures for learning performance and explanation effectiveness
C.3 Evaluation – Evaluation Framework

Explanation Framework (XAI system = explainable model + explanation interface):
• The system takes input from the current task and makes a recommendation, decision, or action
• The system provides an explanation to the user that justifies its recommendation, decision, or action
• The user makes a decision based on the explanation

Measures of Explanation Effectiveness (a toy scoring sketch follows this list):
• User Satisfaction: clarity of the explanation (user rating); utility of the explanation (user rating)
• Mental Model: understanding individual decisions; understanding the overall model; strength/weakness assessment; ‘what will it do’ prediction; ‘how do I intervene’ prediction
• Task Performance: does the explanation improve the user’s decision and task performance?; artificial decision tasks introduced to diagnose the user’s understanding
• Trust Assessment: appropriate future use and trust
• Correctability (extra credit): identifying errors; correcting errors; continuous training
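A hedged sketch of turning these measures into numbers for one user study. The 1-5 rating scales, the session data, and the normalization are illustrative assumptions; the program defines the actual instruments.

```python
# Aggregate one study session's explanation-effectiveness measures.
from statistics import mean

session = {
    "user_satisfaction": [4, 5, 3, 4],   # clarity/utility ratings (assumed 1-5 scale)
    "mental_model":      [1, 1, 0, 1],   # correct 'what will it do' predictions
    "task_performance":  [0.82, 0.91],   # task success rates with explanations
    "trust":             [4, 4, 5],      # appropriate-trust ratings (assumed 1-5 scale)
}

normalized = {
    "user_satisfaction": mean(session["user_satisfaction"]) / 5,
    "mental_model": mean(session["mental_model"]),
    "task_performance": mean(session["task_performance"]),
    "trust": mean(session["trust"]) / 5,
}
print({k: round(v, 2) for k, v in normalized.items()})
```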
D. Technical Areas and Evaluation
• TA1: Explainable Learners
o Multiple TA1 teams will develop prototype explainable learning systems that include both an explainable model and an explanation interface
o Teams follow the deep-learning, interpretable-model, or model-induction approach
• TA2: Psychological Model of Explanation
o At least one TA2 team will summarize current psychological theories of explanation and develop a computational model of explanation from those theories
o Deliverables: a psychological theory of explanation, a computational model, and consulting
• Evaluator
o Applies the evaluation framework across the challenge problem areas (Autonomy: ArduPilot & SITL simulation; Data Analytics: multimedia data)
o Measures learning performance and explanation effectiveness (user satisfaction, mental model, task performance, trust assessment, correctability)
D: Challenge Problem Candidates
• Analytics – Visual Question Answering: MovieQA, CLEVR; Activity Recognition: ActivityNet
• Autonomy – Strategy Games: StarCraft II, ELF-MiniRTS; Vehicle Control: ArduPilot, driving simulator
Distribution Statement "A" (Approved for Public Release, Distribution Unlimited)
Schedule and Milestones
• Technical Area 1 milestones:
o Demonstrate the explainable learners against problems proposed by the developers (Phase 1)
o Demonstrate the explainable learners against common problems (Phase 2)
o Deliver software libraries and toolkits (at the end of Phase 2)
[Gantt chart, APR 2017 – MAY 2021. Phase 1 (Technology Demonstrations, 2017–2018): TA 1 develops and demonstrates explainable models against proposed problems, ending in Eval 1; TA 2 summarizes current psychological theories of explanation and develops a computational model of explanation; the evaluator defines the evaluation framework and prepares for Eval 1. Phase 2 (Comparative Evaluations, 2018–2021): TA 1 refines and tests the explainable learners against common problems through Evals 2 and 3, then delivers software toolkits; TA 2 refines and tests the computational model and delivers it; the evaluator prepares for, runs, and analyzes Evals 2 and 3 and accepts the toolkits. Meetings: kickoff, progress report, tech demos, Eval 1 results, Eval 2 results, final.]
Schedule and Milestones – Phase 1
[Gantt chart, APR 2017 – NOV 2018 (Phase 1: Technology Demonstrations). TA 1 develops and demonstrates explainable models against proposed problems, ending in Eval 1; TA 2 summarizes current psychological theories of explanation and develops the computational model; the evaluator defines the evaluation framework and prepares for Eval 1. Meetings: kickoff, progress report, tech demos.]
XAI Developers (TA1)

CP        | Performer         | Explainable Model    | Explanation Interface
Both      | UC Berkeley       | Deep Learning        | Reflexive & Rational
Both      | Charles River     | Causal Modeling      | Narrative Generation
Both      | UCLA              | Pattern Theory+      | 3-level Explanation
Autonomy  | Oregon State      | Adaptive Programs    | Acceptance Testing
Autonomy  | PARC              | Cognitive Modeling   | Interactive Training
Autonomy  | CMU               | Explainable RL (XRL) | XRL Interaction
Analytics | SRI International | Deep Learning        | Show & Tell Explanation
Analytics | Raytheon BBN      | Deep Learning        | Argumentation & Pedagogy
Analytics | UT Dallas         | Probabilistic Logic  | Decision Diagrams
Analytics | Texas A&M         | Mimic Learning       | Interactive Visualization
Analytics | Rutgers           | Model Induction      | Bayesian Teaching
Explanation by Selection of Teaching Examples (Rutgers)
Bayesian Teaching: optimal selection of examples for machine explanation.
[Figure: an explainable classification model concludes "this face is angry" because it is similar to these training examples and dissimilar to these others; annotated facial features include brow lowered, lips thinned/pushed out, nostrils flared, cheekbones raised, mouth raised, chin pushed out/up.]
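In the spirit of this slide, the sketch below justifies a classification by showing supporting and contrasting training examples. It is a nearest-neighbor approximation invented for illustration: actual Bayesian Teaching scores example sets by how strongly they would lead a learner to the model's inference.

```python
# Example-based explanation: "similar to these, unlike those".
import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
query, query_label = X[0], y[0]

dists = np.linalg.norm(X[1:] - query, axis=1)   # distances to the other items
same = np.where(y[1:] == query_label)[0]
other = np.where(y[1:] != query_label)[0]

supporting = same[np.argsort(dists[same])[:3]]     # most similar, same class
contrasting = other[np.argsort(dists[other])[:3]]  # nearest items of other classes
# (indices are offsets into X[1:], this being a sketch)
print("similar to training items:", supporting, "- unlike items:", contrasting)
```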
Autonomy (PARC, OSU)
• Common Ground Learning and Explanation (COGLE, PARC): an interactive sensemaking system to explain the learned performance capabilities of a UAS flying in an ArduPilot simulation testbed.
• Explanation-Informed Acceptance Testing of Deep Adaptive Programs (xACT, OSU): tools for explaining deep adaptive programs and discovering best principles for designing explanation user interfaces.
[Diagrams include: Common Ground Builder (explain, train, evaluate), robotics curriculum, xNN, annotation-aware reinforcement learning, explanation learner, decision net, deep adaptive program, saliency visualizer, game engine, interactive naming interface, xFSM, visual words]
Challenge Participants: PARC
• Xerox PARC – COGLE (multi-million-dollar contract)
o COmmon Ground Learning and Explanation
o Machine learning research: Reinforcement Learning (RL)
• University of Edinburgh
• University of Michigan
o Cognitive modeling research: “Game of Drones”
• Carnegie Mellon University (CMU)
• West Point (Army)
o User modeling and evaluation
• Florida Institute for Human & Machine Cognition
Summary: A reasonable goal? A good solution?
Quiz and Questions
• Kahoot Jumble
– https://create.kahoot.it/create#/jumble/3
– Begins with a warm-up question
• Comments or Questions?
Easy reading, with more sources:
http://www.ccri.com/2016/11/10/explainable-artificial-intelligence-xai/
https://hackernoon.com/what-is-explainable-ai-how-does-it-affect-your-job-fcfa418bfce1
Team Characteristics and Details
• More details about the 13 teams that joined the challenge
Deeply Explainable Artificial Intelligence
Berkeley/BU/U. Amsterdam/Kitware
Explainable Model (Deep Learning):
• Explain implicit (latent) nodes by training additional DL models
• Explain explicit nodes through Neural Module Networks (NMNs)
Explanation Interface (Reflexive & Rational):
• Reflexive explanations (that arise directly from the model)
• Rational explanations (that come from reasoning about the user’s beliefs)
Challenge Problem (Autonomy; Data Analytics):
• ArduPilot and OpenAI Gym simulations
• Visual QA and multimedia event QA
Team: PI: Trevor Darrell (Berkeley); Pieter Abbeel (Berkeley); Tom Griffiths (Berkeley); Dan Klein (Berkeley); John Canny (Berkeley); Anca Dragan (Berkeley); Kate Saenko (BU); Zeynep Akata (U. Amsterdam); Anthony Hoogs (Kitware)
CAMEL: Causal Models to Explain Learning
CRA/U. Mass/Brown
Explainable Model (Causal Models, via Model Induction):
• Experiment with the learned model (as a grey box) to learn an explainable, causal, probabilistic programming model
Explanation Interface (Narrative Generation):
• Interactive visualization based on the generation of temporal and spatial narratives from the causal, probabilistic models
Challenge Problem (Autonomy; Data Analytics):
• Minecraft, StarCraft
• Pedestrian detection (INRIA), activity recognition (ActivityNet)
Team: PI: Brian Ruttenberg (CRA); Avi Pfeffer (CRA); James Niehaus (CRA); Joe Gorman (CRA); James Tittle (CRA); David Jensen (U. Mass); Michael Littman (Brown); Emilie Roth (Roth Cognitive Engineering)
Learning and Communicating Explainable Representations for Analytics and Autonomy
UCLA/OSU/Michigan State
Explainable Model (Pattern Theory+):
• Integrated representation across an entropy spectrum: deep neural nets, stochastic And-Or Graphs (AOGs), predicate calculus
Explanation Interface (3-Level Explanation):
• Integrate three levels of explanation: concept compositions; causal and counterfactual reasoning; utility explanations
Challenge Problem (Autonomy; Data Analytics):
• Understanding complex multimedia events
• Humanoid robot behavior and a VR simulation platform
Team: PI: Song-Chun Zhu (UCLA); Ying Nian Wu (UCLA); Sinisa Todorovic (OSU); Joyce Chai (Michigan State)
xACT: Explanation-Informed Acceptance Testing of Deep Adaptive Programs
OSU
Explainable Model (Adaptive Programs):
• Explainable Deep Adaptive Programs (xDAPs) – a new combination of adaptive programs, deep learning, and explainability
Explanation Interface (Acceptance Testing):
• A visual & NL explanation interface for acceptance testing by test pilots, based on Information Foraging Theory
Challenge Problem (Autonomy):
• Real-time strategy games based on a custom game engine designed to support explanation
• Possible use of StarCraft
Team: PI: Alan Fern (OSU); Tom Dietterich, Fuxin Li, Prasad Tadepalli, Weng-Keen Wong, Margaret Burnett, Martin Erwig, Liang Huang (all OSU)
COGLE: Common Ground Learning and Explanation
PARC/CMU/U. Edinburgh/U. Mich./West Point
Explainable Model (Cognitive Model):
• 3-layer architecture: Learning Layer (DNNs), Cognitive Layer (ACT-R cognitive model), Explanation Layer (HCI)
Explanation Interface (Interactive Training):
• Interactive visualization of states, actions, policies & values
• Includes a module for test pilots to refine and train the system
Challenge Problem (Autonomy):
• ArduPilot simulation environment
• Value of Explanation (VoE) framework for measuring explanation effectiveness
Team: PI: Mark Stefik (PARC); Sricharan Kumar (PARC); Michael Youngblood (PARC); Honglak Lee (U. Mich.); Subramanian Ramamoorthy (U. Edinburgh); Christian Lebiere (CMU); John Anderson (CMU); Robert Thomson (USMA)
XRL: Explainable Reinforcement Learning for AI Autonomy
CMU/Stanford
Explainable Model (XRL Models):
• Create a new scientific discipline for Explainable Reinforcement Learning, with work on new algorithms and representations
Explanation Interface (XRL Interaction):
• Interactive explanations of dynamic systems
• Human-machine interaction to improve performance
Challenge Problem (Autonomy):
• OpenAI Gym
• Autonomy in the electrical grid
• Mobile service robots
• Self-improving educational software
Team: PI: Geoff Gordon (CMU); Zico Kolter (CMU); Pradeep Ravikumar (CMU); Manuela Veloso (CMU); Emma Brunskill (Stanford)
DARE: Deep Attention-based Representations for Explanation
SRI/U. Toronto/UCSD/U. Guelph
Explainable Model (Deep Learning):
• Multiple deep learning techniques: attention-based mechanisms, compositional NMNs, GANs
Explanation Interface (Show-and-Tell Explanations):
• DNN visualization
• Query evidence that explains DNN decisions
• Generate natural-language justifications
Challenge Problem (Data Analytics):
• Visual Question Answering (VQA) using Visual Genome and Flickr30
• MovieQA
Team: PIs: Giedrius Burachas (SRI), Mohamed Amer (SRI); Shalini Ghosh (SRI); Avi Ziskind (SRI); Michael Wessel (SRI); Richard R. Zemel (U. Toronto); Sanja Fidler (U. Toronto); David Duvenaud (U. Toronto); Graham Taylor (U. Guelph); Jürgen Schulze (UCSD)
EQUAS: Explainable QUestion Answering System
Raytheon BBN/GA Tech/UT Austin/MIT
Explainable Model (Deep Learning):
• Semantic labelling of DNN neurons
• DNN audit-trail construction
• Gradient-weighted Class Activation Mapping (Grad-CAM)
Explanation Interface (Argumentation & Pedagogy):
• Comprehensive strategy based on argumentation theory
• NL generation
• DNN visualization
Challenge Problem (Data Analytics):
• Visual Question Answering (VQA), beginning with images and progressing to video
Team: PI: William Ferguson (Raytheon BBN); Devi Parikh (GA Tech); Dhruv Batra (GA Tech); Antonio Torralba (MIT); Ray Mooney (UT Austin)
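Grad-CAM is a published technique, so a minimal sketch can illustrate the idea: weight the last convolutional layer's activation maps by the gradient of the class score, then combine and rectify into a heat map. The tiny random network and input below are stand-ins, not the EQUAS system.

```python
# Heavily simplified Grad-CAM sketch in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 8, 3, padding=1)   # stand-in "last conv layer"
head = nn.Linear(8, 10)                # stand-in classifier head
x = torch.rand(1, 3, 32, 32)

fmap = conv(x)                         # (1, 8, 32, 32) activation maps
fmap.retain_grad()
score = head(F.relu(fmap).mean(dim=(2, 3)))  # global-average-pool classifier
score[0, score.argmax()].backward()

weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
cam = F.relu((weights * fmap).sum(dim=1)).squeeze()  # (32, 32) heat map
print(cam.shape)
```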
Tractable Probabilistic Logic Models: A New, Deep Explainable Representation
UT Dallas/UCLA/Texas A&M/IIT-Delhi
Explainable Model (Probabilistic Logic):
• Tractable Probabilistic Logic Models (TPLMs) – an important class of (non-deep-learning) interpretable models
Explanation Interface (Probabilistic Decision Diagrams):
• Enables users to explore and correct the underlying model as well as add background knowledge
Challenge Problem (Data Analytics):
• Infer activities in multimodal data (video and text), using the Wetlab (biology) and TACoS (cooking) datasets
Team: PI: Vibhav Gogate (UT Dallas); Nicholas Ruozzi (UT Dallas); Adnan Darwiche (UCLA); Guy Van den Broeck (UCLA); Eric Ragan (Texas A&M); Parag Singla (IIT-Delhi)
Transforming Deep Learning to Harness the Interpretability of Shallow Models: An Interactive End-to-End System
Texas A&M/Wash. State
Explainable Model (Mimic Learning):
• Develop a mimic-learning framework that combines deep learning models for prediction with shallow models for explanations
Explanation Interface (Interactive Visualization):
• Interactive visualization over multiple views, using heat maps & topic-modeling clusters to show predictive features
Challenge Problem (Data Analytics):
• Multiple tasks using data from Twitter, Facebook, ImageNet, UCI, NIST, and Kaggle
• Metrics for explanation effectiveness
Team: PI: Xia Hu (Texas A&M); Eric Ragan (Texas A&M); Shuiwang Ji (Wash. State)
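A hedged sketch of the mimic-learning idea: train a shallow, explainable model on the deep model's soft outputs, so the shallow model approximates (and thereby explains) what the deep model does. The specific models and synthetic data are illustrative stand-ins for the team's framework.

```python
# Shallow model trained to mimic a deep model's soft predictions.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)

# Fit the tree to the deep model's class-1 *probabilities* (soft labels),
# so it mimics the deep model rather than the raw training labels:
mimic = DecisionTreeRegressor(max_depth=4, random_state=0)
mimic.fit(X, deep.predict_proba(X)[:, 1])

agreement = ((mimic.predict(X) > 0.5) == deep.predict(X)).mean()
print(f"shallow mimic matches the deep model on {agreement:.1%} of items")
```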
Model Explanation by Optimal Selection of Teaching Examples
Rutgers
Explainable Model (Model Induction):
• Select the optimal training examples to explain model decisions, based on Bayesian Teaching
Explanation Interface (Bayesian Teaching):
• Example-based explanation of: the full model; user-selected sub-structures; user-submitted examples
Challenge Problem (Data Analytics):
• Movie descriptions, image processing, caption data, movie events, human motion events
Team: PI: Patrick Shafto (Rutgers); Scott Cheng-Hsin Yang (Rutgers)
Naturalistic Decision Making Foundations of Explainable AI
IHMC/MacroCognition/Michigan Tech
Literature Review (Naturalistic Theory):
• Extensive review of relevant psychological theories
• Extend the theory of Naturalistic Decision Making to cover explanation
Computational Model (Bayesian Framework):
• Represent the reductionist mental models that humans develop as part of the explanatory process, including mental simulation
Model Validation (Experiments):
• Conduct interactive assessments and formal human experiments
• Validate the model
• Develop metrics of explanation effectiveness
Team: PI: Robert R. Hoffman (IHMC); Peter Pirolli (IHMC); William J. Clancey (IHMC); Jordan Litman (IHMC, psychometrician); Gary Klein (MacroCognition); Shane T. Mueller (Michigan Tech); Simon Attfield (Middlesex University, London); COL Timothy M. Cullen (SAASS)
XAI Evaluation
NRL
Challenge Problems:
• Analytics (e.g., two trucks performing a loading activity)
• Autonomy
Evaluation Framework:
• Evaluation protocols
• Training environment: training data, simulation environment
• Testing environment: subjects, web infrastructure, baseline systems
Measurement:
• [Chart: explanation effectiveness vs. learning performance, contrasting systems today with tomorrow]
Team: PI: David Aha (NRL); Leslie Smith (NRL); Mike Pazzani (UC Riverside); Justin Karneeb (Knexus); Matt Molineaux (Knexus)

More Related Content

What's hot

Continual Learning with Deep Architectures - Tutorial ICML 2021
Continual Learning with Deep Architectures - Tutorial ICML 2021Continual Learning with Deep Architectures - Tutorial ICML 2021
Continual Learning with Deep Architectures - Tutorial ICML 2021
Vincenzo Lomonaco
 
MachineLearning.ppt
MachineLearning.pptMachineLearning.ppt
MachineLearning.ppt
butest
 

What's hot (20)

Replicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender SystemsReplicable Evaluation of Recommender Systems
Replicable Evaluation of Recommender Systems
 
Introduction to Interpretable Machine Learning
Introduction to Interpretable Machine LearningIntroduction to Interpretable Machine Learning
Introduction to Interpretable Machine Learning
 
Active learning
Active learningActive learning
Active learning
 
Lecture 2 Basic Concepts in Machine Learning for Language Technology
Lecture 2 Basic Concepts in Machine Learning for Language TechnologyLecture 2 Basic Concepts in Machine Learning for Language Technology
Lecture 2 Basic Concepts in Machine Learning for Language Technology
 
[系列活動] 機器學習速遊
[系列活動] 機器學習速遊[系列活動] 機器學習速遊
[系列活動] 機器學習速遊
 
Joseph Jay Williams - WESST - Bridging Research via MOOClets and Collaborativ...
Joseph Jay Williams - WESST - Bridging Research via MOOClets and Collaborativ...Joseph Jay Williams - WESST - Bridging Research via MOOClets and Collaborativ...
Joseph Jay Williams - WESST - Bridging Research via MOOClets and Collaborativ...
 
Continual Learning with Deep Architectures - Tutorial ICML 2021
Continual Learning with Deep Architectures - Tutorial ICML 2021Continual Learning with Deep Architectures - Tutorial ICML 2021
Continual Learning with Deep Architectures - Tutorial ICML 2021
 
Choosing a Machine Learning technique to solve your need
Choosing a Machine Learning technique to solve your needChoosing a Machine Learning technique to solve your need
Choosing a Machine Learning technique to solve your need
 
Module 1 introduction to machine learning
Module 1  introduction to machine learningModule 1  introduction to machine learning
Module 1 introduction to machine learning
 
The zen of predictive modelling
The zen of predictive modellingThe zen of predictive modelling
The zen of predictive modelling
 
PyData SF 2016 --- Moving forward through the darkness
PyData SF 2016 --- Moving forward through the darknessPyData SF 2016 --- Moving forward through the darkness
PyData SF 2016 --- Moving forward through the darkness
 
Machine Learning for Q&A Sites: The Quora Example
Machine Learning for Q&A Sites: The Quora ExampleMachine Learning for Q&A Sites: The Quora Example
Machine Learning for Q&A Sites: The Quora Example
 
A Machine Learning Primer,
A Machine Learning Primer,A Machine Learning Primer,
A Machine Learning Primer,
 
Applied Artificial Intelligence Unit 3 Semester 3 MSc IT Part 2 Mumbai Univer...
Applied Artificial Intelligence Unit 3 Semester 3 MSc IT Part 2 Mumbai Univer...Applied Artificial Intelligence Unit 3 Semester 3 MSc IT Part 2 Mumbai Univer...
Applied Artificial Intelligence Unit 3 Semester 3 MSc IT Part 2 Mumbai Univer...
 
MachineLearning.ppt
MachineLearning.pptMachineLearning.ppt
MachineLearning.ppt
 
Strategies for Practical Active Learning, Robert Munro
Strategies for Practical Active Learning, Robert MunroStrategies for Practical Active Learning, Robert Munro
Strategies for Practical Active Learning, Robert Munro
 
Managing machine learning
Managing machine learningManaging machine learning
Managing machine learning
 
Statistics assignment help
Statistics assignment helpStatistics assignment help
Statistics assignment help
 
Introduction to machine learning and deep learning
Introduction to machine learning and deep learningIntroduction to machine learning and deep learning
Introduction to machine learning and deep learning
 
Lecture 1: Introduction to the Course (Practical Information)
Lecture 1: Introduction to the Course (Practical Information)Lecture 1: Introduction to the Course (Practical Information)
Lecture 1: Introduction to the Course (Practical Information)
 

Similar to 2018.01.25 rune sætre_triallecture_xai_v2

​​Explainability in AI and Recommender systems: let’s make it interactive!
​​Explainability in AI and Recommender systems: let’s make it interactive!​​Explainability in AI and Recommender systems: let’s make it interactive!
​​Explainability in AI and Recommender systems: let’s make it interactive!
Eindhoven University of Technology / JADS
 
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docxTerm Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
mattinsonjanel
 

Similar to 2018.01.25 rune sætre_triallecture_xai_v2 (20)

Human in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AIHuman in the loop: Bayesian Rules Enabling Explainable AI
Human in the loop: Bayesian Rules Enabling Explainable AI
 
problem identification in research
problem identification in researchproblem identification in research
problem identification in research
 
ODSC APAC 2022 - Explainable AI
ODSC APAC 2022 - Explainable AIODSC APAC 2022 - Explainable AI
ODSC APAC 2022 - Explainable AI
 
ML crash course
ML crash courseML crash course
ML crash course
 
Applying AI to software engineering problems: Do not forget the human!
Applying AI to software engineering problems: Do not forget the human!Applying AI to software engineering problems: Do not forget the human!
Applying AI to software engineering problems: Do not forget the human!
 
The state of the art in integrating machine learning into visual analytics
The state of the art in integrating machine learning into visual analyticsThe state of the art in integrating machine learning into visual analytics
The state of the art in integrating machine learning into visual analytics
 
​​Explainability in AI and Recommender systems: let’s make it interactive!
​​Explainability in AI and Recommender systems: let’s make it interactive!​​Explainability in AI and Recommender systems: let’s make it interactive!
​​Explainability in AI and Recommender systems: let’s make it interactive!
 
A Topic Model of Analytics Job Adverts (The Operational Research Society 55th...
A Topic Model of Analytics Job Adverts (The Operational Research Society 55th...A Topic Model of Analytics Job Adverts (The Operational Research Society 55th...
A Topic Model of Analytics Job Adverts (The Operational Research Society 55th...
 
Planning For Reuse 2009 03 09
Planning For Reuse 2009 03 09Planning For Reuse 2009 03 09
Planning For Reuse 2009 03 09
 
A Topic Model of Analytics Job Adverts (Operational Research Society Annual C...
A Topic Model of Analytics Job Adverts (Operational Research Society Annual C...A Topic Model of Analytics Job Adverts (Operational Research Society Annual C...
A Topic Model of Analytics Job Adverts (Operational Research Society Annual C...
 
Starting a career in data science
Starting a career in data scienceStarting a career in data science
Starting a career in data science
 
Machine_Learning.pptx
Machine_Learning.pptxMachine_Learning.pptx
Machine_Learning.pptx
 
Data-X-Sparse-v2
Data-X-Sparse-v2Data-X-Sparse-v2
Data-X-Sparse-v2
 
Structured design: Modular style for modern content
Structured design: Modular style for modern contentStructured design: Modular style for modern content
Structured design: Modular style for modern content
 
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for EveryoneGDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone
 
Interpretable Machine Learning
Interpretable Machine LearningInterpretable Machine Learning
Interpretable Machine Learning
 
Data-X-v3.1
Data-X-v3.1Data-X-v3.1
Data-X-v3.1
 
Crafting a Compelling Data Science Resume
Crafting a Compelling Data Science ResumeCrafting a Compelling Data Science Resume
Crafting a Compelling Data Science Resume
 
Unit No 6 Design Patterns.pptx
Unit No 6 Design Patterns.pptxUnit No 6 Design Patterns.pptx
Unit No 6 Design Patterns.pptx
 
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docxTerm Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
Term Paper VirtualizationDue Week 10 and worth 210 pointsThis.docx
 

Recently uploaded

Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOST
Sérgio Sacani
 
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdfPests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
PirithiRaju
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSS
LeenakshiTyagi
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Sérgio Sacani
 
The Philosophy of Science
The Philosophy of ScienceThe Philosophy of Science
The Philosophy of Science
University of Hertfordshire
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
Sérgio Sacani
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
PirithiRaju
 

Recently uploaded (20)

CELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdfCELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdf
 
fundamental of entomology all in one topics of entomology
fundamental of entomology all in one topics of entomologyfundamental of entomology all in one topics of entomology
fundamental of entomology all in one topics of entomology
 
Botany krishna series 2nd semester Only Mcq type questions
Botany krishna series 2nd semester Only Mcq type questionsBotany krishna series 2nd semester Only Mcq type questions
Botany krishna series 2nd semester Only Mcq type questions
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOST
 
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdfPests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptx
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSS
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
 
The Philosophy of Science
The Philosophy of ScienceThe Philosophy of Science
The Philosophy of Science
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdf
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
 
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 60009654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
 
Natural Polymer Based Nanomaterials
Natural Polymer Based NanomaterialsNatural Polymer Based Nanomaterials
Natural Polymer Based Nanomaterials
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C P
 
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptxUnlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 

2018.01.25 rune sætre_triallecture_xai_v2

  • 1. 11 Explainable Artificial Intelligence (XAI) and Machine Learning (ML) Models Rune Sætre, NTNU Trial Lecture* 25 JAN 2018 Based on slides by David Gunning DARPA/Information Innovation Office (I2O), 11 AUG 2016, and 21 DEC 2017 * Slides available on https://www.slideshare.net/runesaetre or http://bit.ly/2GbRyYU
  • 2. 22 2 Rune Sætre • 50% Associate Professor, Department of Computer and Information Science (IDI) • 50% Tekna NTNU (Labor Union) Leader • Married to Yuko from Japan. Father to Ken • MSc and PhD in AI (2003, 2006). – http://BussTUC.idi.ntnu.no, endors.no – Exchange to UCSB, LMU Munich, Tokyo LMU Munich
  • 3. 33 3 Lecture Outline: XAI and ML models • - Self introduction • A: Motivation • B: Explainable AI (XAI) – Machine Learning Models (ML) – Human Computer Interaction (HCI) – Psychological Model of Explanation • C: Ongoing xAI DARPA Project – Technical Areas – Participants & Examples • Quiz (http://kahoot.it) and Questions • Summary
  • 4. 55 Sensemaking Operations Watson AlphaGo A. Introduction - The Need for Explainable AI UserAI Syste m • Why did you do that? • Why not something else? • When do you succeed? • When do you fail? • When can I trust you? • How do I correct an error? • We are entering a new age of AI applications • Machine learning is the core technology • Machine learning models are opaque, non-intuitive, and difficult for people to understand ©IBM ©Marcin Bajer/Flickr ©Eric Keenan, U.S. Marine Corps ©NASA.go v • The current generation of AI systems offer tremendous benefits, but their effectiveness will be limited by the machine’s inability to explain its decisions and actions to users. • Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.
  • 5. 66 B. Program Scope – XAI Concept Machine Learning Process Training Data Learned Function Today • Why did you do that? • Why not something else? • When do you succeed? • When do you fail? • When can I trust you? • How do I correct an error? Decision or Recommendation Task User New Machine Learning Process Training Data Explainable Model XAI Explanation Interface • I understand why • I understand why not • I know when you succeed • I know when you fail • I know when to trust you • I know why you erred Task User
  • 6. 77 • The target of XAI is an end user who: o depends on decisions, recommendations, or actions of the system o needs to understand the rationale for the system’s decisions to understand, appropriately trust, and effectively manage the system • The XAI concept is to: o provide an explanation of individual decisions o enable understanding of overall strengths & weaknesses o convey an understanding of how the system will behave in the future o convey how to correct the system’s mistakes (perhaps) B. Program Scope – XAI Concept New Machine Learning Process Training Data Explainable Model XAI Explanation Interface • I understand why • I understand why not • I know when you succeed • I know when you fail • I know when to trust you • I know why you erred Task User
  • 7. 88 • develop a range of new or modified machine learning techniques to produce more explainable models • integrate state-of-the-art HCI with new principles, strategies, and techniques to generate effective explanations • summarize, extend, and apply current psychological theories of explanation to develop a computational theory B. Program Scope – XAI Development Challenges New Machine Learning Process Training Data XAI • I understand why • I understand why not • I know when you succeed • I know when you fail • I know when to trust you • I know why you erred Task User Explainable Model Explanation Interface Explainable Models Explanation Interface Psychology of Explanation
  • 8. 99 • develop a range of new or modified machine learning techniques to produce more explainable models • integrate state-of-the-art HCI with new principles, strategies, and techniques to generate effective explanations • summarize, extend, and apply current psychological theories of explanation to develop a computational theory B. Program Scope – XAI Development Challenges New Machine Learning Process Training Data XAI • I understand why • I understand why not • I know when you succeed • I know when you fail • I know when to trust you • I know why you erred Task User Explainable Model Explanation Interface Explainable Models Explanation Interface Psychology of Explanation TA 1: Explainable Learners TA 2: Psychological Models
  • 9. 1010 B.1 Explainable Models PredictionAccuracy Explainability Learning Techniques (today) Neural Nets Statistical Models Ensemble Methods Decision Trees Deep Learning SVMs AOGs Bayesian Belief Nets Markov Models HBNs MLNs New Approach Create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance SRL CRFs Random Forests Graphical Models Explainability (notional)
  • 10. 1111 B.1 Explainable Models PredictionAccuracy Explainability Neural Nets Statistical Models Ensemble Methods Decision Trees Deep Learning SVMs AOGs Bayesian Belief Nets Markov Models HBNs MLNs Deep Explanation Modified deep learning techniques to learn explainable features Create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance SRL CRFs Random Forests Graphical Models Learning Techniques (today)New Approach Explainability (notional)
  • 11. 1212 B.1 Explainable Models PredictionAccuracy Explainability Neural Nets Statistical Models Ensemble Methods Decision Trees Deep Learning SVMs AOGs Bayesian Belief Nets Markov Models HBNs MLNs Deep Explanation Modified deep learning techniques to learn explainable features Create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance SRL Interpretable Models Techniques to learn more structured, interpretable, causal models CRFs Random Forests Graphical Models Learning Techniques (today)New Approach Explainability (notional)
  • 12. 1313 B.1 Explainable Models PredictionAccuracy Explainability Neural Nets Statistical Models Ensemble Methods Decision Trees Deep Learning SVMs AOGs Bayesian Belief Nets Markov Models HBNs MLNs Model Induction Techniques to infer an explainable model from any model as a black box Deep Explanation Modified deep learning techniques to learn explainable features Create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance SRL Interpretable Models Techniques to learn more structured, interpretable, causal models CRFs Random Forests Graphical Models Learning Techniques (today)New Approach Explainability (notional)
  • 13. 1414 • State of the Art Human Computer Interaction (HCI) o UX design o Visualization o Language understanding & generation • New Principles and Strategies o Explanation principles o Explanation strategies o Explanation dialogs • HCI in the Broadest Sense o Cognitive science o Mental models • Joint Development as an Integrated System o In conjunction with the Explainable Models • Existing Machine Learning Techniques o Also consider explaining existing ML techniques B.2 Explanation Interface
  • 14. 1515 • Psychology Theories of Explanation o Structure and function of explanation o Role of explanation in reasoning and learning o Explanation quality and utility • Theory Summarization o Summarize existing theories of explanation o Organize and consolidate theories most useful for XAI o Provide advice and consultation to XAI developers and evaluator • Computational Model o Develop computational model of theory o Generate predictions of explanation quality and effectiveness • Model Testing and Validation o Test model against a set of standard evaluation settings B.3 Psychology of Explanation
  • 15. 1616 B.4 Emphasis and Scope of XAI Research Machine Learning Human Computer Interaction End User Explanation XAI Emphasis Question Answering Dialogs Visual Analytics Interactive ML
  • 16. 1717 C: DoD Funding Categories Category Definition Basic Research (6.1) Systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and/or observable facts without specific applications in mind. Applied Research (6.2) Systematic study to gain knowledge or understanding necessary to determine the means by which a recognized and specific need may be met. Technology Development (6.3) Includes all efforts that have moved into the development and integration of hardware (and software) for field experiments and tests. XAI
  • 17. 1818 C: Explainable AI – Challenge Problem Areas Learn a model to perform the task Data Analytics Classification Learning Task Recommend Explanation Explainable Model Explanation Interface An analyst is looking for items of interest in massive multimedia data sets Explainable Model Explanation Interface Autonomy Reinforcement Learning Task Actions Explanation ArduPilot & SITL Simulation An operator is directing autonomous systems to accomplish a series of missions Classifies items of interest in large data set Learns decision policies for simulated missions Explains why/why not for recommended items Analyst decides which items to report, pursue Explains behavior in an after-action review Operator decides which future tasks to delegate Use the explanation to perform a task Explain decisions, actions to the user Multimedia Data Two trucks performing a loading activity ©Getty Images ©US Army ©Air Force Research Lab ©ArduPikot.org
  • 18. 2020 • Machine learning to classify items, events, or patterns of interest o In heterogeneous, multimedia data o Include structured/semi-structured data in addition to images and video o Require meaningful explanations that are not obvious in video alone • Proposers should describe: o Data sets and training data (including background knowledge sources) o Classification function to be learned o Types of explanations to be provided o User decisions to be supported • Challenge problem progression o Describe an appropriate progression of test problems to support your development strategy C.1 Data Analysis
  • 19. 2121 • Reinforcement learning to learn sequential decision policies o For a simulated autonomous agent (e.g., UAV) o Explanations may cover other needed planning, decision, or control modules, as well as decision policies learned through reinforcement learning o Explain high level decisions that would be meaningful to the end user (i.e., not low level motor control) • Proposers should describe: o Simulation environment o Types of missions to be covered o Decision policies and mission tasks to be learned o Types of explanations to be provided o User decisions to be supported • Challenge problem progression o Describe an appropriate progression of test problems to support your development strategy C.2 Autonomy
  • 20. 2222 • XAI developers are presented with a problem domain • Apply machine learning techniques to learn an explainable model • Combine with the explanation interface to construct an explainable system • The explainable system delivers and explains decisions or actions to a user who is performing domain tasks • The system’s decisions and explanations contribute (positively or negatively) to the user’s performance of the domain tasks • The evaluator measures the learning performance and explanation effectiveness • The evaluator also conducts evaluations of existing machine learning techniques to establish baseline measures for learning performance and explanation effectiveness C.3 Evaluation – Evaluation Sequence
  • 21. 2323 C.3 Evaluation – Evaluation Framework Explanation Framework Task Decision Recommendation, Decision or Action Explanation The system takes input from the current task and makes a recommendation, decision, or action The system provides an explanation to the user that justifies its recommendation, decision, or action The user makes a decision based on the explanation Explainable Model Explanation Interface XAI System Measure of Explanation Effectiveness User Satisfaction • Clarity of the explanation (user rating) • Utility of the explanation (user rating) Mental Model • Understanding individual decisions • Understanding the overall model • Strength/weakness assessment • ‘What will it do’ prediction • ‘How do I intervene’ prediction Task Performance • Does the explanation improve the user’s decision, task performance? • Artificial decision tasks introduced to diagnose the user’s understanding Trust Assessment • Appropriate future use and trust Correctablity (Extra Credit) • Identifying errors • Correcting errors, Continuous training
  • 22. 2424 • TA1: Explainable Learners o Multiple TA1 teams will develop prototype explainable learning systems that include both an explainable model and an explanation interface • TA2: Psychological Model of Explanation o At least one TA2 team will summarize current psychological theories of explanation and develop a computational model of explanation from those theories D. Technical Areas and Evaluation Challenge Problem Areas Evaluation Framework Evaluator TA 2: Psychological Model of Explanation TA 1: Explainable Learners Autonomy ArduPilot & SITL Simulation Data Analytics Multimedia Data Explanation Measures • User Satisfaction • Mental Model • Task Performance • Trust Assessment • Correctability Learning Performance Explanation Effectiveness • Psych. Theory of Explanation • Computational Model • Consulting Teams that provide prototype systems with both components: • Explainable Model • Explanation Interface Deep Learning Teams Interpretable Model Teams Model Induction Teams
  • 23. 2525 D: Challenge Problem Candidates Analytics Autonomy Starcraft2 ELF-MiniRTSMovieQA CLEVR ActivityNet ArduPilot Driving Simulator Visual Question Answering Activity Recognition Strategy Games Vehicle Control Distribution Statement "A" (Approved for Public Release, Distribution Unlimited)
  • 24. 2929 Schedule and Milestones • Technical Area 1 Milestones: • Demonstrate the explainable learners against problems proposed by the developers (Phase 1) • Demonstrate the explainable learners against common problems (Phase 2) • Deliver software libraries and toolkits (at the end of Phase 2) APR MAY JUN JUL AUG SEP OCT NOV DEC JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC JAN FEB MAR APR MAY KickOff Progress Report Tech Demos Eval 1 Results Eval 2 Results Final Meetings Refine & Test Explainable Learners (against common problems) Eval 3 Deliver Software Toolkits TA 2 Summarize Current Psychological Theories of Explanation Develop Computational Model of Explanation Refine & Test Computational Model Deliver Computational Model TA 1 Develop & Demonstrate Explainable Models (against proposed problems) Eval 1 Refine & Test Explainable Learners (against common problems) Eval 2 Evaluator Define Evaluation Framework Prep for Eval 1 Eval 1 Analyze Results 2021 PHASE 1: Technology Demonstrations PHASE 2: Comparative Evaluations Prep for Eval 2 2017 2018 2019 2020 Eval 2 Analyze Results Prep for Eval 3 Eval 3 Analysze Results & Accept Toolkits
  • 25. 3131 Schedule and Milestones – Phase 1: Technology Demonstrations (APR 2017 – NOV 2018)
    – Meetings: kickoff, progress report, tech demos
    – TA1: develop & demonstrate explainable models (against proposed problems); Eval 1
    – TA2: summarize current psychological theories of explanation; develop computational model
    – Evaluator: define evaluation framework; prepare for Eval 1; Eval 1
  • 26. 3434 XAI Developers (TA1)
    CP         Performer           Explainable Model       Explanation Interface
    Both       UC Berkeley         Deep Learning           Reflexive & Rational
    Both       Charles River       Causal Modeling         Narrative Generation
    Both       UCLA                Pattern Theory+         3-Level Explanation
    Autonomy   Oregon State        Adaptive Programs       Acceptance Testing
    Autonomy   PARC                Cognitive Modeling      Interactive Training
    Autonomy   CMU                 Explainable RL (XRL)    XRL Interaction
    Analytics  SRI International   Deep Learning           Show & Tell Explanation
    Analytics  Raytheon BBN        Deep Learning           Argumentation & Pedagogy
    Analytics  UT Dallas           Probabilistic Logic     Decision Diagrams
    Analytics  Texas A&M           Mimic Learning          Interactive Visualization
    Analytics  Rutgers             Model Induction         Bayesian Teaching
  • 27. 3838 Explanation by Selection of Teaching Examples (Rutgers)
    – Bayesian Teaching: optimal selection of training examples for machine explanation
    – Explainable classification model: "This face is Angry because it is similar to these examples and dissimilar to these examples"
    – Facial features cited as evidence: brow lowered; lips thinned/pushed out; nostrils flared; cheekbones raised; mouth raised; chin pushed out/up
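The core idea of Bayesian teaching is to pick the examples that maximize a simulated learner's posterior on the target hypothesis. A minimal sketch under toy assumptions (two 1-D Gaussian class models standing in for the real face classifier; all numbers invented):

```python
# Sketch of Bayesian teaching: search for the example set that best "teaches"
# the target hypothesis to a Bayesian learner. Hypotheses here are toy 1-D
# Gaussian class means; nothing below reproduces the Rutgers system itself.
from itertools import combinations
import math

def likelihood(x, mu, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(target, hypotheses, examples):
    """P(target | examples) for a Bayesian learner with a uniform prior."""
    def evidence(mu):
        prod = 1.0
        for x in examples:
            prod *= likelihood(x, mu)
        return prod
    return evidence(hypotheses[target]) / sum(evidence(mu) for mu in hypotheses.values())

hypotheses = {"angry": -1.0, "calm": +1.0}   # toy class models
pool = [-2.1, -0.9, -0.1, 0.4, 1.3]          # candidate teaching examples

# Bayesian teaching: pick the 2-example set that best teaches "angry".
best = max(combinations(pool, 2),
           key=lambda d: posterior("angry", hypotheses, d))
print(best, posterior("angry", hypotheses, best))
```

The selected examples then serve double duty: they train the simulated learner and, shown to a human, explain the classifier's decision by similarity and contrast.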
  • 28. 3939 Autonomy (PARC, OSU)
    – COGLE (Common Ground Learning and Explanation, PARC): an interactive sensemaking system to explain the learned performance capabilities of a UAS flying in an ArduPilot simulation testbed
      o Common Ground Builder: explain, train, evaluate; robotics curriculum
    – xACT (Explanation-Informed Acceptance Testing of Deep Adaptive Programs, OSU): tools for explaining deep adaptive programs and discovering the best principles for designing explanation user interfaces
      o Components: xNN, annotation-aware reinforcement learning, explanation learner, deep adaptive program, saliency visualizer, game engine, interactive naming interface, xFSM, visual words, decision net
  • 29. 4040 Challenge Participants: PARC
    – Xerox PARC COGLE (multi-million-dollar contract)
      o COmmon Ground Learning and Explanation
      o Machine learning research, Reinforcement Learning (RL): University of Edinburgh; University of Michigan
      o Cognitive modeling research ("Game of Drones"): Carnegie Mellon University (CMU); West Point (Army)
      o User modeling and evaluation: Florida Institute for Human & Machine Cognition
  • 30. 4444 Summary: A reasonable goal? A good solution?
  • 31. 4545 Quiz and Questions
    – Kahoot Jumble
      o https://create.kahoot.it/create#/jumble/3
      o Starts with a warm-up question
    – Comments or Questions?
    – Easy reading, with more sources:
      o http://www.ccri.com/2016/11/10/explainable-artificial-intelligence-xai/
      o https://hackernoon.com/what-is-explainable-ai-how-does-it-affect-your-job-fcfa418bfce1
  • 32. 4646 Team Characteristics and Details
    – More details about the 13 teams that joined the challenge
  • 33. 4747 Deeply Explainable Artificial Intelligence (Berkeley/BU/U. Amsterdam/Kitware)
    – Challenge Problems: Autonomy (ArduPilot and OpenAI Gym simulations); Data Analytics (visual QA and multimedia event QA)
    – Explainable Model (deep learning): explain implicit (latent) nodes by training additional DL models; explain explicit nodes through Neural Module Networks (NMNs)
    – Explanation Interface (reflexive & rational): reflexive explanations arise directly from the model; rational explanations come from reasoning about the user's beliefs
    – Team: PI Trevor Darrell, Pieter Abbeel, Tom Griffiths, Dan Klein, John Canny, Anca Dragan (Berkeley); Kate Saenko (BU); Zeynep Akata (U. Amsterdam); Anthony Hoogs (Kitware)
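One way to "explain a latent node by training an additional model" is a probing classifier: fit a small auxiliary model that maps hidden activations to a human-interpretable attribute. A minimal sketch with synthetic data (the attribute and the pretend encoding are invented for illustration):

```python
# Sketch of explaining implicit (latent) units via an auxiliary model:
# a linear probe from hidden activations to an interpretable attribute.
# Data is synthetic; this is not the Berkeley team's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 64))         # hidden-layer activations
attribute = (activations[:, 3] > 0).astype(int)  # pretend unit 3 encodes "stripes"

probe = LogisticRegression().fit(activations, attribute)
print("probe accuracy:", probe.score(activations, attribute))
# Large probe weights point at the latent units most tied to the attribute:
print("most predictive unit:", np.abs(probe.coef_[0]).argmax())
```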
  • 34. 4848 CAMEL: Causal Models to Explain Learning (CRA/U. Mass/Brown)
    – Challenge Problems: Autonomy (Minecraft, StarCraft); Data Analytics (pedestrian detection (INRIA), activity recognition (ActivityNet))
    – Explainable Model (model induction): experiment with the learned model (as a grey box) to learn an explainable, causal, probabilistic programming model
    – Explanation Interface (narrative generation): interactive visualization based on the generation of temporal and spatial narratives from the causal, probabilistic models
    – Team: PI Brian Ruttenberg, Avi Pfeffer, James Niehaus, Joe Gorman, James Tittle (CRA); David Jensen (U. Mass); Michael Littman (Brown); Emilie Roth (Roth Cognitive Engineering)
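The model-induction pattern (probe a learned model as a grey box, then fit an interpretable surrogate to its behavior) can be sketched in a few lines. Below, the "black box" is a stand-in function and the surrogate is a shallow decision tree; CAMEL itself induces causal probabilistic programs, not trees:

```python
# Sketch of grey-box model induction: run designed experiments against a
# learned model and fit an interpretable surrogate to its answers.
# black_box() and the feature names are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def black_box(x):                                # stand-in for the learned policy
    return (x[:, 0] + 2 * x[:, 1] > 1).astype(int)

rng = np.random.default_rng(1)
probes = rng.uniform(-2, 2, size=(2000, 2))      # experimental inputs
answers = black_box(probes)                      # observed behavior

surrogate = DecisionTreeClassifier(max_depth=2).fit(probes, answers)
print("fidelity:", surrogate.score(probes, answers))   # agreement with black box
print(export_text(surrogate, feature_names=["speed", "altitude"]))
```

The key metric for any surrogate is fidelity: how often it agrees with the model it explains, not how often it is right about the world.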
  • 35. 4949 Learning and Communicating Explainable Representations for Analytics and Autonomy (UCLA/OSU/Michigan State)
    – Challenge Problems: Autonomy (humanoid robot behavior and a VR simulation platform); Data Analytics (understanding complex multimedia events)
    – Explainable Model (Pattern Theory+): integrated representation across an entropy spectrum: deep neural nets, stochastic And-Or Graphs (AOGs), predicate calculus
    – Explanation Interface (3-level explanation): concept compositions; causal and counterfactual reasoning; utility explanations
    – Team: PI Song-Chun Zhu, Ying Nian Wu (UCLA); Sinisa Todorovic (OSU); Joyce Chai (Michigan State)
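The And-Or graph representation is worth a toy illustration: And nodes compose parts, Or nodes choose among alternatives, and the chosen branch (the parse) doubles as a concept-composition explanation. Structure and scores below are made up:

```python
# Toy And-Or graph: the best parse is both the prediction and its explanation.
# Node format: (kind, name, children); for a "leaf", children holds its score.
def parse(node):
    """Return (score, explanation) for the best parse of an And-Or graph."""
    kind, name, children = node
    if kind == "leaf":
        return children, name
    parses = [parse(c) for c in children]
    if kind == "and":                            # all parts must be present
        return (min(s for s, _ in parses),
                name + "(" + ", ".join(e for _, e in parses) + ")")
    best = max(parses)                           # "or": pick the best alternative
    return best[0], name + "->" + best[1]

face = ("and", "angry_face", [
    ("or", "brow", [("leaf", "lowered_brow", 0.9), ("leaf", "raised_brow", 0.1)]),
    ("or", "mouth", [("leaf", "thinned_lips", 0.7), ("leaf", "smile", 0.2)]),
])
print(parse(face))  # (0.7, 'angry_face(brow->lowered_brow, mouth->thinned_lips)')
```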
  • 36. 5050 xACT: Explanation-Informed Acceptance Testing of Deep Adaptive Programs (OSU)
    – Challenge Problem: Autonomy; real-time strategy games on a custom game engine designed to support explanation, with possible use of StarCraft
    – Explainable Model (adaptive programs): Explainable Deep Adaptive Programs (xDAPs), a new combination of adaptive programs, deep learning, and explainability
    – Explanation Interface (acceptance testing): a visual & NL explanation interface for acceptance testing by test pilots, based on Information Foraging Theory
    – Team: PI Alan Fern, Tom Dietterich, Fuxin Li, Prasad Tadepalli, Weng-Keen Wong, Margaret Burnett, Martin Erwig, Liang Huang (OSU)
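The adaptive-program idea is ordinary program structure with choice points whose decisions are learned from reward. A rough sketch under strong simplifications (a single choice point trained by a bandit update; the actual xDAP machinery and game are not represented):

```python
# Rough sketch of a learned choice point inside an otherwise ordinary program.
# The bandit update and the toy RTS reward below are purely illustrative.
import random

class ChoicePoint:
    def __init__(self, options, eps=0.1):
        self.values = {o: 0.0 for o in options}
        self.counts = {o: 0 for o in options}
        self.eps = eps

    def choose(self):
        if random.random() < self.eps:                 # occasional exploration
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)   # otherwise exploit

    def reward(self, option, r):                       # incremental mean update
        self.counts[option] += 1
        self.values[option] += (r - self.values[option]) / self.counts[option]

attack_or_defend = ChoicePoint(["attack", "defend"])
for _ in range(1000):                                  # toy episodes
    action = attack_or_defend.choose()
    r = random.gauss(1.0 if action == "attack" else 0.3, 0.5)
    attack_or_defend.reward(action, r)
print(attack_or_defend.values)                         # learned preference

# Because the learned values live at a named program point, an explanation
# interface can display them directly: "attack preferred by Δ expected reward".
```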
  • 37. 5151 COGLE: Common Ground Learning and Explanation (PARC/CMU/U. Edinburgh/U. Mich./West Point)
    – Challenge Problem: Autonomy; ArduPilot simulation environment
    – Explainable Model (cognitive modeling): 3-layer architecture with a Learning Layer (DNNs), a Cognitive Layer (ACT-R cognitive model), and an Explanation Layer (HCI)
    – Explanation Interface (interactive training): interactive visualization of states, actions, policies & values; a module for test pilots to refine and train the system; a Value of Explanation (VoE) framework for measuring explanation effectiveness
    – Team: PI Mark Stefik, Sricharan Kumar, Michael Youngblood (PARC); Christian Lebiere, John Anderson (CMU); Subramanian Ramamoorthy (U. Edinburgh); Honglak Lee (U. Mich.); Robert Thomson (USMA)
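One ingredient of visualizing "states, actions, policies & values" is a saliency map over the policy's inputs: which state features most influence the chosen action? A plain input-gradient sketch on a toy policy network (the real system's layers and UAS state are not shown; the 8-feature state is an assumption):

```python
# Sketch of input-gradient saliency for a policy network: backpropagate the
# chosen action's logit to the input state and rank features by |gradient|.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
state = torch.randn(1, 8, requires_grad=True)   # e.g., toy UAS sensor features

logits = policy(state)
action = logits.argmax(dim=1).item()
logits[0, action].backward()                    # d(chosen logit) / d(state)

saliency = state.grad.abs().squeeze()
print("chosen action:", action)
print("most influential feature:", saliency.argmax().item())
```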
  • 38. 5252 XRL: Explainable Reinforcement Learning for AI Autonomy (CMU/Stanford)
    – Challenge Problem: Autonomy; OpenAI Gym, autonomy in the electrical grid, mobile service robots, and self-improving educational software
    – Explainable Model (XRL models): create a new scientific discipline for explainable reinforcement learning, with work on new algorithms and representations
    – Explanation Interface (XRL interaction): interactive explanations of dynamic systems; human-machine interaction to improve performance
    – Team: PI Geoff Gordon, Zico Kolter, Pradeep Ravikumar, Manuela Veloso (CMU); Emma Brunskill (Stanford)
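A minimal flavor of one kind of RL explanation: justify an action by the gap in learned values and answer "why not something else?" by contrast. The Q-values below are invented; this is not the CMU/Stanford teams' method:

```python
# Contrastive action explanation from a learned value table (toy numbers).
Q = {"charge_battery": 4.2, "serve_request": 3.9, "idle": 0.5}

def explain_choice(q_values):
    ranked = sorted(q_values.items(), key=lambda kv: -kv[1])
    (best, bv), (runner, rv) = ranked[0], ranked[1]
    return (f"Chose '{best}' (expected return {bv:.1f}); "
            f"preferred over '{runner}' by {bv - rv:.1f} expected return.")

print(explain_choice(Q))
```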
  • 39. 5353 DARE: Deep Attention-based Representations for Explanation (SRI/U. Toronto/UCSD/U. Guelph)
    – Challenge Problem: Data Analytics; Visual Question Answering (VQA) using Visual Genome and Flickr30k, plus MovieQA
    – Explainable Model (deep learning): multiple deep learning techniques: attention-based mechanisms, compositional NMNs, GANs
    – Explanation Interface (show-and-tell explanations): DNN visualization; query evidence that explains DNN decisions; generate natural-language justifications
    – Team: PIs Giedrius Burachas and Mohamed Amer, Shalini Ghosh, Avi Ziskind, Michael Wessel (SRI); Richard Zemel, Sanja Fidler, David Duvenaud (U. Toronto); Graham Taylor (U. Guelph); Jürgen Schulze (UCSD)
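Attention-based evidence for VQA can be sketched as softmax attention over detected image regions, with the top-weighted regions reported as the "evidence" for the answer. Region features, the question encoding, and the labels below are synthetic placeholders:

```python
# Sketch of attention weights as VQA evidence: score image regions against
# the question vector, then report the highest-attention regions.
import torch
import torch.nn.functional as F

regions = torch.randn(5, 256)          # 5 detected image-region features
question = torch.randn(256)            # encoded question vector

scores = regions @ question            # dot-product attention scores
attn = F.softmax(scores, dim=0)        # attention weights over regions

labels = ["dog", "frisbee", "grass", "tree", "sky"]
top = attn.argsort(descending=True)[:2]
print("evidence:", [(labels[i], round(attn[i].item(), 2)) for i in top])
```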
  • 40. 5454 EQUAS: Explainable QUestion Answering System (Raytheon BBN/GA Tech/UT Austin/MIT)
    – Challenge Problem: Data Analytics; Visual Question Answering (VQA), beginning with images and progressing to video
    – Explainable Model (deep learning): semantic labelling of DNN neurons; DNN audit-trail construction; Gradient-weighted Class Activation Mapping (Grad-CAM)
    – Explanation Interface (argumentation & pedagogy): a comprehensive strategy based on argumentation theory; NL generation; DNN visualization
    – Team: PI William Ferguson (Raytheon BBN); Devi Parikh, Dhruv Batra (GA Tech); Ray Mooney (UT Austin); Antonio Torralba (MIT)
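Grad-CAM is concrete enough to sketch: weight the last convolutional feature maps by the spatially pooled gradients of the class score, then upsample the result into a heatmap. A minimal PyTorch version on a stock ResNet and a random image (a real pipeline would load and normalize an actual photo and pretrained weights):

```python
# Sketch of Grad-CAM: hooks capture the last conv block's activations and
# gradients; pooled gradients weight the feature maps into a class heatmap.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4                                  # last conv block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

img = torch.randn(1, 3, 224, 224)                     # placeholder image
scores = model(img)
scores[0, scores.argmax()].backward()                 # top class's score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print("heatmap shape:", cam.shape)                    # aligns with the input
```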
  • 41. 5555 Tractable Probabilistic Logic Models: A New, Deep Explainable Representation (UT Dallas/UCLA/Texas A&M/IIT-Delhi)
    – Challenge Problem: Data Analytics; infer activities in multimodal data (video and text), using the Wetlab (biology) and TACoS (cooking) datasets
    – Explainable Model (probabilistic logic): Tractable Probabilistic Logic Models (TPLMs), an important class of interpretable (non-deep-learning) models
    – Explanation Interface (probabilistic decision diagrams): enables users to explore and correct the underlying model, as well as add background knowledge
    – Team: PI Vibhav Gogate, Nicholas Ruozzi (UT Dallas); Adnan Darwiche, Guy Van den Broeck (UCLA); Eric Ragan (Texas A&M); Parag Singla (IIT-Delhi)
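To give a feel for why weighted-rule models are self-explanatory, here is a toy illustration of the general probabilistic-logic flavor (not the authors' TPLM algorithms): each rule contributes its weight when its body holds, and the total is squashed into a probability. Rules and weights are invented:

```python
# Toy weighted-rule activity model: the rules *are* the explanation, and a
# user can inspect, edit, or add them directly. Not the actual TPLM method.
import math

rules = [  # (weight, condition over observed facts) -- hypothetical
    (1.5, lambda f: "holds_knife" in f and "near_cutting_board" in f),
    (0.8, lambda f: "in_kitchen" in f),
    (-2.0, lambda f: "sitting_at_table" in f),
]

def p_activity(facts, rules):
    score = sum(w for w, cond in rules if cond(facts))
    return 1 / (1 + math.exp(-score))                 # logistic link

facts = {"holds_knife", "near_cutting_board", "in_kitchen"}
print("P(chopping) =", round(p_activity(facts, rules), 2))
```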
  • 42. 5656 Transforming Deep Learning to Harness the Interpretability of Shallow Models: An Interactive End-to-End System (Texas A&M/Wash. State)
    – Challenge Problem: Data Analytics; multiple tasks using data from Twitter, Facebook, ImageNet, UCI, NIST, and Kaggle; metrics for explanation effectiveness
    – Explainable Model (mimic learning): a mimic-learning framework that combines deep learning models for prediction with shallow models for explanation
    – Explanation Interface (interactive visualization): visualization over multiple views, using heat maps & topic-modeling clusters to show predictive features
    – Team: PI Xia Hu, Eric Ragan (Texas A&M); Shuiwang Ji (Wash. State)
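The mimic-learning recipe is straightforward to sketch: train the deep model for accuracy, then train a shallow, readable model on the deep model's soft outputs rather than the raw labels. Synthetic data below; the team's own models and metrics are not reproduced:

```python
# Sketch of mimic learning: a shallow tree regresses onto the deep model's
# soft predictions, yielding a readable approximation of its behavior.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X, y)

soft_labels = deep.predict_proba(X)[:, 1]        # knowledge to be mimicked
mimic = DecisionTreeRegressor(max_depth=3).fit(X, soft_labels)

agree = ((mimic.predict(X) > 0.5) == deep.predict(X)).mean()
print(f"mimic/deep agreement: {agree:.2%}")
print(export_text(mimic, feature_names=[f"x{i}" for i in range(8)]))
```

Training on soft labels rather than hard ones lets the shallow model inherit the deep model's learned decision boundary, including its uncertainty.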
  • 43. 5757 Model Explanation by Optimal Selection of Teaching Examples (Rutgers)
    – Challenge Problem: Data Analytics; movie descriptions, image processing, caption data, movie events, human motion events
    – Explainable Model (model induction): select the optimal training examples to explain model decisions, based on Bayesian Teaching (see the sketch after slide 27 above)
    – Explanation Interface (Bayesian teaching): example-based explanation of the full model, of user-selected substructures, and of user-submitted examples
    – Team: PI Patrick Shafto, Scott Cheng-Hsin Yang (Rutgers)
  • 44. 5858 Naturalistic Decision Making Foundations of Explainable AI (IHMC/MacroCognition/Michigan Tech)
    – Literature review (naturalistic theory): extensive review of relevant psychological theories; extend the theory of Naturalistic Decision Making to cover explanation
    – Computational model (Bayesian framework): represent the reductionist mental models that humans develop as part of the explanatory process, including mental simulation
    – Model validation (experiments): conduct interactive assessments and formal human experiments; validate the model; develop metrics of explanation effectiveness
    – Team: PI Robert R. Hoffman, Jordan Litman (psychometrician), Peter Pirolli, William J. Clancey (IHMC); Gary Klein (MacroCognition); Shane T. Mueller (Michigan Tech); Simon Attfield (Middlesex University, London); COL Timothy M. Cullen (SAASS)
  • 45. 5959 XAI Evaluation (NRL)
    – Evaluation framework: evaluation protocols; training environment (training data, simulation environment); testing environment (subjects, web infrastructure, baseline systems)
    – Measurement: learning performance and explanation effectiveness, today vs. tomorrow
    – Challenge problems: Analytics (e.g., explaining the classification "two trucks performing a loading activity") and Autonomy
    – Team: PI David Aha, Leslie Smith (NRL); Mike Pazzani (UC Riverside); Justin Karneeb, Matt Molineaux (Knexus)
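One ingredient of any such evaluation protocol is comparing user task performance with the XAI system against a no-explanation baseline. A minimal sketch with invented scores (the actual NRL protocols, subjects, and measures are far richer):

```python
# Sketch of a baseline-vs-XAI comparison on user task scores (toy data).
from scipy.stats import ttest_ind

baseline = [0.58, 0.61, 0.55, 0.64, 0.60, 0.57]   # task scores, no explanations
with_xai = [0.71, 0.66, 0.74, 0.69, 0.73, 0.68]   # task scores, with explanations

t, p = ttest_ind(with_xai, baseline)
lift = sum(with_xai) / len(with_xai) - sum(baseline) / len(baseline)
print(f"mean lift = {lift:.3f}, p = {p:.4f}")
```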

Editor's Notes

  1. Welcome
  2. The briefing will follow the outline of the BAA (Broad Agency Announcement)
  3. Dramatic success in machine learning has led to an explosion of new AI capabilities. These systems offer tremendous benefits, but their effectiveness will be limited by the machine’s inability to explain its decisions and actions to human users. This is especially important for the Department of Defense (DoD), which faces challenges that demand the development of more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners. By creating new machine learning methods that produce more explainable models and combining them with explanation techniques, XAI aims to help users understand, appropriately trust, and effectively manage the emerging generation of AI systems.
  4. Research Focus. How will we focus the research? This chart illustrates two general challenge problem areas we will ask researchers to address: first, a classification problem to classify events of interest in multimedia data; second, a reinforcement learning problem to train an autonomous UAV to perform a variety of missions. The basic framework for either problem is the following: the performers are asked to train their system, through machine learning, to solve problems in that domain. The system produced would contain an explainable model and an explanation interface. The trained system will then be tested by surrogate users who will use the system, and its explanations, to perform a related task. In the data analytics task, users would need to decide which recommendations to pursue and which were false alarms. In the autonomy task, users would need to decide how and when to use the autonomous system on future missions. We will then evaluate both the learning performance and the explanation performance of the system. In the BAA I will ask each bidder to propose their own test problem in one or both of these areas. During the first year of the program, I will ask the evaluator to design a set of common test problems and metrics in each area.
  5. We frame explanation in terms of sensemaking and common ground. Herb Clark’s term “common ground” was developed to describe how people collaborating must establish common vocabulary and understanding before they can work together effectively. His research studied how people engage in a nuanced conversational dialog and negotiate the meanings of terms through their discourse. Each party observes the world and develops representations in its own mind. We are interested in enabling common ground between people and machine-learning systems. Rather than requiring computers to master natural language, we use a shared database as external memory for common ground in human + computer teams. Because sharing of experience via common ground is two-way, it can enable humans to understand what machines have learned, and also enable machines to understand what humans have learned. In COGLE, the human users and COGLE operate on a common and observable world. For COGLE, that world is a simulated world (“Game of Drones”). The database has representations of actions (e.g., “turning left” or “landing”), domain features (e.g., “mountains” or “lost hikers”), goals (e.g., “find the lost hiker” or “drop a package” or “use fuel efficiently”), and also abstractions of these for conceptual economy in reasoning (e.g., “taking the shortest route” or “efficient foraging patterns” or “avoiding obstacles”). Each party translates its internal representation to and from representations in the external memory. The humans and computers learn from each other through demonstrations and explorations using the external memory. Given the different cognitive abilities of humans and machines, we expect human plus computer teams with common ground to work better and learn faster than humans or machines alone.
  6. Theories of both argument construction and interactive pedagogy suggest four modes of explanation