9. EU GDPR
• GDPR Article 22: Automated individual decision-making, including profiling
1. The data subject shall have the right not to be subject to a decision based
solely on automated processing, including profiling, which produces legal
effects concerning him or her or similarly significantly affects him or her.
2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering
into, or performance of, a contract between the data subject and a data
controller; (b) is authorised by Union or Member State law to which the
controller is subject and which also lays down suitable measures to safeguard
the data subject’s rights and freedoms and legitimate interests; or (c) is
based on the data subject’s explicit consent.
3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller
shall implement suitable measures to safeguard the data subject’s rights and
freedoms and legitimate interests, at least the right to obtain human
intervention on the part of the controller, to express his or her point of view
and to contest the decision.
4. Decisions referred to in paragraph 2 shall not be based on special categories of
personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2)
applies and suitable measures to safeguard the data subject’s rights and
freedoms and legitimate interests are in place.
10. Background and references
• The topic has attracted growing attention since around 2016, including at ICML and NIPS.
• Journal article (in Japanese), Vol.33, No.3, pages 366–369, 2018.
• Qiita article (in Japanese).
11. A survey of explainable AI
Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)
https://ieeexplore.ieee.org/document/8466590/
• Searched 7 databases: SCOPUS, IEEExplore, ACM Digital Library, Google Scholar, Citeseer Library, ScienceDirect, arXiv
• Explainability keywords: “intelligible”, “interpretable”, “transparency”, “black box”, “understandable”, “comprehensible”, “explainable” AI
• AI keywords: “Artificial Intelligence”, “Intelligent system”, “Machine learning”, “deep learning”, “classifier”, “decision tree”
16. Model-agnostic explanation methods (with implementations)
• Why Should I Trust You?: Explaining the Predictions of Any Classifier, KDD'16 [Python: LIME; R: LIME]
• A Unified Approach to Interpreting Model Predictions, NIPS'17 [Python: SHAP]
• Anchors: High-Precision Model-Agnostic Explanations, AAAI'18 [Python: Anchor]
• Understanding Black-box Predictions via Influence Functions, ICML'17 [Python: influence-release]
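The core idea behind LIME can be sketched in a few lines: perturb the input, query the black box, and fit a simple local surrogate whose coefficients serve as the explanation. Everything here is a made-up toy (the black-box `predict`, the point being explained, and the per-feature slope fit); the real library exposes a richer API such as `lime.lime_tabular.LimeTabularExplainer`.

```python
import random

def predict(x):
    # Hypothetical black-box classifier (a stand-in, not a trained model):
    # predicts 1 when a nonlinear score of the two features exceeds 1.
    return 1.0 if x[0] * x[0] + 0.5 * x[1] > 1.0 else 0.0

def lime_like_weights(x0, n_samples=2000, sigma=0.3, seed=0):
    """Fit a local linear surrogate around x0, LIME-style: per-feature
    slope cov(x_i, y) / var(x_i). This simple fit is valid here because
    each feature is perturbed independently."""
    rng = random.Random(seed)
    xs = [[x0[0] + rng.gauss(0, sigma), x0[1] + rng.gauss(0, sigma)]
          for _ in range(n_samples)]
    ys = [predict(x) for x in xs]
    ybar = sum(ys) / n_samples
    weights = []
    for i in range(2):
        xbar = sum(x[i] for x in xs) / n_samples
        cov = sum((x[i] - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n_samples
        var = sum((x[i] - xbar) ** 2 for x in xs) / n_samples
        weights.append(cov / var)
    return weights

# Explain the model near x0 = (1.0, 0.0): the surrogate slope for feature 0
# should dominate, matching the local gradient (2, 0.5) of the decision score.
w = lime_like_weights([1.0, 0.0])
```

The surrogate's coefficients are the explanation: the larger the slope, the more that feature drives the prediction locally.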
17. Simplifying tree ensembles
• Born Again Trees
• Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach, AISTATS'18 [Python: defragTrees]
18. Gradient-based explanations for deep models (with implementations)
• Implementations: [Python+TensorFlow: saliency; DeepExplain]
  • Striving for Simplicity: The All Convolutional Net (GuidedBackprop)
  • On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation (Epsilon-LRP)
  • Axiomatic Attribution for Deep Networks (IntegratedGrad)
  • SmoothGrad: Removing Noise by Adding Noise (SmoothGrad)
  • Learning Important Features Through Propagating Activation Differences (DeepLIFT)
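The common starting point of all the methods above is the plain gradient of the model output with respect to the input. A minimal sketch on a toy logistic model (the model, weights, and input are illustrative stand-ins, not taken from the cited papers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_saliency(w, x):
    """Vanilla gradient saliency for f(x) = sigmoid(w . x):
    analytically, |df/dx_i| = |w_i| * f * (1 - f)."""
    f = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [abs(wi) * f * (1.0 - f) for wi in w]

w = [3.0, -0.2, 0.0]  # made-up weights: feature 0 matters most, feature 2 not at all
x = [0.5, 1.0, 1.0]
s = gradient_saliency(w, x)
```

For an image classifier, the same quantity computed per pixel is the saliency map; the listed variants (GuidedBackprop, LRP, IntegratedGrad, SmoothGrad, DeepLIFT) each modify how this gradient is computed or aggregated.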
20. Model-agnostic explanation methods (with implementations)
• Why Should I Trust You?: Explaining the Predictions of Any Classifier, KDD'16 [Python: LIME; R: LIME]
• A Unified Approach to Interpreting Model Predictions, NIPS'17 [Python: SHAP]
• Anchors: High-Precision Model-Agnostic Explanations, AAAI'18 [Python: Anchor]
• Understanding Black-box Predictions via Influence Functions, ICML'17 [Python: influence-release]
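SHAP attributes a prediction to features via Shapley values from cooperative game theory. Exact computation sums over all feature subsets and is exponential in the number of features, which is why SHAP's approximations exist; for a handful of features it can be done directly. The value function below is a made-up additive toy, chosen so the expected answer is known:

```python
from itertools import combinations
from math import factorial

def shapley(value, n):
    """Exact Shapley values for n features and a value function value(S)
    defined on frozensets of feature indices. Exponential in n."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy additive value function: each present feature contributes a fixed amount,
# so the Shapley value should recover each contribution exactly.
contrib = [2.0, -1.0, 0.5]
v = lambda S: sum(contrib[j] for j in S)
phi = shapley(v, 3)
```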
57. Gradient-based explanations for deep models (with implementations)
• Implementations: [Python+TensorFlow: saliency; DeepExplain]
  • Striving for Simplicity: The All Convolutional Net (GuidedBackprop)
  • On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation (Epsilon-LRP)
  • Axiomatic Attribution for Deep Networks (IntegratedGrad)
  • SmoothGrad: Removing Noise by Adding Noise (SmoothGrad)
  • Learning Important Features Through Propagating Activation Differences (DeepLIFT)
63. Sensitivity analysis: gradients as importance
• Model: y = f(x)
• Saliency [Simonyan et al., arXiv'14]: take the gradient ∂f(x)/∂x_i as the importance of input x_i
• Large |∂f(x)/∂x_i| → a small change in x_i → a large change in the output → x_i is important for the prediction
• Small |∂f(x)/∂x_i| → the output is insensitive to x_i → x_i contributes little
64. Saliency [Simonyan et al., arXiv'14]: the gradient ∂f(x)/∂x of the model output with respect to the input
• Variants that modify how the gradient is back-propagated or aggregated:
  • GuidedBP [Springenberg et al., arXiv'14]: modified back propagation through ReLUs
  • LRP [Bach et al., PloS ONE'15]
  • IntegratedGrad [Sundararajan et al., arXiv'17]
  • SmoothGrad [Smilkov et al., arXiv'17]
  • DeepLIFT [Shrikumar et al., ICML'17]
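Of the variants above, SmoothGrad is the simplest to sketch: average the gradient over several noise-perturbed copies of the input to denoise the raw saliency map. The scalar model f(x) = tanh(2·x0 − x1) and its analytic gradient below are stand-ins for a network's class score:

```python
import math
import random

def grad_f(x):
    # Analytic gradient of the toy model f(x) = tanh(2*x0 - x1).
    t = math.tanh(2.0 * x[0] - x[1])
    d = 1.0 - t * t  # derivative of tanh at the pre-activation
    return [2.0 * d, -1.0 * d]

def smoothgrad(x, sigma=0.1, n=500, seed=0):
    """SmoothGrad (Smilkov et al.): average the gradient over n copies of
    the input perturbed with Gaussian noise of scale sigma."""
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        acc = [a + gi for a, gi in zip(acc, grad_f(noisy))]
    return [a / n for a in acc]

g = smoothgrad([0.1, 0.2])
# The averaged gradient keeps the sign pattern (+, -) of the true gradient.
```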
65. Gradient-based explanations for deep models (with implementations)
• Implementations: [Python+TensorFlow: saliency; DeepExplain]
  • Striving for Simplicity: The All Convolutional Net (GuidedBackprop)
  • On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation (Epsilon-LRP)
  • Axiomatic Attribution for Deep Networks (IntegratedGrad)
  • SmoothGrad: Removing Noise by Adding Noise (SmoothGrad)
  • Learning Important Features Through Propagating Activation Differences (DeepLIFT)
69. defragTrees
• Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach, AISTATS'18 [Python: defragTrees]
• Example output: a short list of “when …” rules that approximates the ensemble:
• when Relationship ≠ Not-in-family, Wife and Capital Gain < 7370
• when Relationship ≠ Not-in-family and Capital Gain >= 7370
• when Relationship ≠ Not-in-family, Unmarried and Capital Gain < 5095 and Capital Loss < 2114
• when Relationship = Not-in-family and Country ≠ China, Peru and Capital Gain < 5095
• when Relationship ≠ Not-in-family and Country ≠ China and Capital Gain < 5095
• when Relationship ≠ Not-in-family and Capital Gain >= 7370
• …
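A rule list like the one above is interpretable precisely because each rule is just a conjunction of feature tests. As a toy re-implementation of that representation (not the defragTrees package), the first two rules can be written as predicates over a record and checked directly; the record below is made up:

```python
def rule(conds):
    """A rule is a list of (feature, test) pairs; it fires when all tests pass."""
    return lambda rec: all(test(rec[feat]) for feat, test in conds)

# Two of the extracted rules above, written as predicates.
r1 = rule([("Relationship", lambda v: v not in {"Not-in-family", "Wife"}),
           ("Capital Gain", lambda v: v < 7370)])
r2 = rule([("Relationship", lambda v: v != "Not-in-family"),
           ("Capital Gain", lambda v: v >= 7370)])

rec = {"Relationship": "Husband", "Capital Gain": 8000}
fired = [name for name, r in [("rule 1", r1), ("rule 2", r2)] if r(rec)]
# Only rule 2 fires: Capital Gain 8000 >= 7370 rules out rule 1.
```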
103. Example 1
• Thaliana gene expression data (Atwell et al. '10):
  • Input x ∈ ℝ^d
  • Output y ∈ ℝ
  • 134 samples
104. Example 2
• 20 Newsgroups Data (Lang '95); ibm vs mac
  • Input x ∈ ℝ^d: tf-idf features
  • Label y ∈ {ibm, mac} (binary)
  • 1168 samples
• Extracted word groups and the class they indicate:
  bios, drive → ibm
  ide, drive → ibm
  dos, os, drive → ibm
  controller, drive → ibm
  quadra, centris, 040, clock → mac
  windows, bios, controller, disk, drive → ibm
  bios, help, controller, disk, drive → ibm
  centris, pc, 610 → mac
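The tf-idf representation used for these vectors can be sketched in a few lines. Real implementations differ in smoothing and normalization; this is one common unsmoothed form (term count times log inverse document frequency), and the tiny corpus is made up in the spirit of ibm-vs-mac posts:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain tf-idf: weight(t, d) = count(t in d) * log(N / df(t))."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))  # document frequency of each term
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

docs = [["ibm", "drive", "bios"], ["mac", "quadra"], ["ibm", "dos", "drive"]]
v = tfidf(docs)
# "bios" appears in 1 of 3 docs, so in doc 0 it weighs log(3);
# "ibm" appears in 2 of 3 docs, so it weighs log(3/2) -- rarer terms score higher.
```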