The document discusses various methods for explaining AI model predictions, including LIME and SHAP. LIME explains individual predictions by approximating them with an interpretable model, while SHAP connects game theory and local explanations by using Shapley values. The document also mentions educational projects on filtering academic papers and human-machine interactions, emphasizing the need for explainable user interfaces and agent strategy summarization.
12. “Why Should I Trust You?” - Explaining the Predictions of Any Classifier
LIME: Local Interpretable Model-Agnostic Explanations
13. LIME
- Explains individual model predictions
- Enhances user trust
- Supports multi-class classification, e.g. text documents, images, etc.
- Packages for R, Python, etc. (see the sketch below)
- As seen at the PwC talk & workshop!
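A minimal sketch of what using the LIME Python package looks like for a single text prediction. The 20-newsgroups data, the scikit-learn TF-IDF / Naive Bayes pipeline, and all variable names are assumptions made for this illustration, not something from the talk.

```python
# Explaining one text-classification prediction with the `lime` package.
# Dataset and pipeline below are illustrative assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

train = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=train.target_names)
explanation = explainer.explain_instance(
    train.data[0],           # the single document to explain
    model.predict_proba,     # the black-box probability function
    num_features=6,          # top keywords to report
)
# Each pair is (word, weight); positive weights push towards the explained class.
print(explanation.as_list())
```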
14. SHAP - NIPS'17
SHapley Additive exPlanations
“A Unified Approach to Interpreting Model Predictions” - Lundberg & Lee
“Connects game theory with local explanations”
SHAP value - feature importance measured as a feature's impact (effect) on the output (formula below)
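For reference, the SHAP value of feature i in Lundberg & Lee's formulation is the classic Shapley value: the feature's marginal contribution to the model output, averaged over all subsets S of the full feature set F (notation follows the paper):

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,(|F| - |S| - 1)!}{|F|!}
         \left[ f_{S \cup \{i\}}\big(x_{S \cup \{i\}}\big) - f_{S}\big(x_{S}\big) \right]
```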
15. SHAP - NIPS'17
Unifies:
- LIME
- Shapley sampling values
- Shapley regression values
- DeepLIFT
- ...
Better consistency with human intuition (usage sketch below)
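A minimal sketch of the `shap` Python package in its model-agnostic Kernel SHAP mode, the variant most directly connected to LIME. The random-forest model, the toy dataset, and the sample sizes are assumptions made for this illustration.

```python
# Kernel SHAP sketch with the `shap` package; model and data are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Kernel SHAP only needs a prediction function and a small background sample
# from which to estimate the Shapley values.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:5])

# Depending on the shap version, shap_values is a list with one array per class
# or a single array; each entry gives every feature's impact on the output.
print(shap_values)
```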
Simon - Craftworks
26. Human-Machine Interactions - Ofra Amir
PhD, Harvard University
Intelligent Interactive Systems (advanced topics in information systems) - Technion, Israel
AAMAS'18:
- HIGHLIGHTS: Summarizing Agent Behavior to People
- Agent Strategy Summarization (simplified sketch below)
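A rough sketch of the idea behind HIGHLIGHTS (AAMAS'18): rank the states an agent visits by how much its choice of action matters there, i.e. the gap between the best and worst Q-value, and keep the top-k states as the strategy summary. The trace format, the Q-value accessor, and the function names below are assumptions for illustration; the full algorithm also keeps surrounding context frames and enforces diversity between selected states.

```python
from typing import Callable, List, Sequence, Tuple

def state_importance(q_values: Sequence[float]) -> float:
    """Importance of a state: gap between the best and worst action value.

    A large gap means the agent's decision in this state matters a lot,
    which makes the state informative to show in a strategy summary.
    """
    return max(q_values) - min(q_values)

def highlights_summary(
    trace: Sequence[object],
    q_fn: Callable[[object], Sequence[float]],  # hypothetical: Q(s, a) for all actions a
    k: int = 5,
) -> List[Tuple[float, object]]:
    """Pick the k most important states from an execution trace (simplified)."""
    scored = [(state_importance(q_fn(state)), state) for state in trace]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)[:k]
```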
29. Explain results
- Keyword contributions: positive / negative (toy rendering below)
- Encourage user feedback, enabling model adaptation
- Stronger engagement
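A toy sketch of such an explainable-UI element: render each keyword's contribution with its sign so the user can see what pushed the prediction. The (word, weight) pairs mirror what e.g. LIME's explanation.as_list() returns; the sample words and values are made up for illustration.

```python
# Toy rendering of per-keyword contributions; the values are invented.
contributions = [("refund", 0.42), ("delay", 0.31), ("thanks", -0.18)]

for word, weight in sorted(contributions, key=lambda p: -abs(p[1])):
    sign = "+" if weight >= 0 else "-"
    bar = "#" * int(abs(weight) * 20)  # crude strength indicator
    print(f"[{sign}] {word:<10} {bar}")
# Feedback collected on these keywords (e.g. "irrelevant") can be logged and
# folded back into the training data, enabling the model adaptation above.
```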
30. What’s in RNN Vector?
http://u.cs.biu.ac.il/~yogo/blackbox2018.pdf
31. What’s in RNN Vector?
http://u.cs.biu.ac.il/~yogo/blackbox2018.pdf
As mentioned by Vered Schwarz; covered in more depth at DSI_UniWien on 12.12
32. Explainable AI
- Models are prone to making mistakes
- Interpretability to the rescue!
- Be Creative: Explainable UI
Future Research:
- Effective Agent Strategy Summary
- Expected Behavior under Different Conditions