Automated Machine Learning and eXplainable Artificial Intelligence are disruptive technologies in Data Science. Here I briefly introduce them and show how the DALEXverse may be used for better model development.
8. Defaults (package defaults, Def.P, and optimal defaults, Def.O), tunability of the hyperparameters with the package defaults (Tun.P) and our optimal defaults (Tun.O) as reference, and tuning-space quantiles (q0.05 and q0.95) for different parameters of the algorithms
17. • “You don’t see a lot of skepticism,” she says. “The algorithms are like shiny new
toys that we can’t resist using. We trust them so much that we project meaning onto
them.”
• Ultimately algorithms, according to O’Neil, reinforce discrimination and widen
inequality, “using people’s fear and trust of mathematics to prevent them from
asking questions”.
https://www.theguardian.com/books/2016/oct/27/cathy-oneil-weapons-of-math-destruction-algorithms-big-data
Cathy O'Neil: The era of blind faith in big data must end
black boxes
Why do we need explanations for complex models?
22. Local model approximations
"Why Should I Trust You?" Explaining the Predictions of Any Classifier.
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin (2016). https://arxiv.org/pdf/1602.04938.pdf
Port to R: Thomas Lin Pedersen (2017) https://github.com/thomasp85/lime
Other implementations: live (Staniak, Biecek 2018) and iml (Molnar 2018)
A different approach to model explanation is to locally approximate the complex black-box model with an easier-to-interpret white-box model constructed on interpretable features.
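The idea above can be sketched in a few lines. This is a minimal, self-contained illustration of a LIME-style local surrogate, not the actual `lime`/`live`/`iml` implementation: the black-box function, the perturbation width, and the proximity kernel are all made-up assumptions for demonstration. We sample points around the instance of interest, weight them by closeness, and fit a weighted linear (white-box) model whose coefficients act as local feature effects.

```python
import math
import random

def black_box(x1, x2):
    # Hypothetical non-linear "complex" model we want to explain locally.
    return 1.0 / (1.0 + math.exp(-(3 * x1 * x2 - x1)))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def local_surrogate(f, x0, n_samples=500, width=0.5, seed=0):
    # Sample perturbations around x0, weight them by proximity, and fit a
    # weighted linear surrogate g(x) = b0 + b1*x1 + b2*x2 near x0.
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, width) for xi in x0]
        d2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x0))
        X.append([1.0] + z)
        y.append(f(*z))
        w.append(math.exp(-d2 / (2 * width ** 2)))  # RBF proximity kernel
    # Weighted least squares via normal equations: (X'WX) beta = X'Wy.
    p = len(X[0])
    A = [[sum(w[k] * X[k][i] * X[k][j] for k in range(n_samples))
          for j in range(p)] for i in range(p)]
    b = [sum(w[k] * X[k][i] * y[k] for k in range(n_samples)) for i in range(p)]
    return solve(A, b)

coef = local_surrogate(black_box, [1.0, 1.0])
print(coef)  # intercept and local slopes of the white-box surrogate
```

The surrogate is faithful only near `x0`; the proximity kernel controls how "local" the explanation is, which is exactly the design choice LIME exposes.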
24. Biecek P (2018). “DALEX: Explainers for Complex Predictive Models in R.”
Journal of Machine Learning Research, 19(84), 1-5. URL: http://jmlr.org/papers/v19/18-416.html
25. A chatbot for explanations
https://kmichael08.github.io
26. A chatbot for explanations
https://kmichael08.github.io
What If?
Why?
33. iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models
Alicja Gosiewska, Przemyslaw Biecek (2019) https://arxiv.org/abs/1903.11420v1
34. SHAP (SHapley Additive exPlanations) Lundberg (2017)
IME complexity is O(2^p). Shapley values have been known for some time, and we have methods to approximate them efficiently.
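To see where the O(2^p) comes from, here is a minimal pure-Python sketch of exact Shapley values, enumerating every coalition of features. The toy value function `v` and the feature names A, B, C are invented for illustration; real SHAP implementations approximate this sum rather than enumerate it.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, players):
    # Exact Shapley values: weighted average marginal contribution of each
    # player over all coalitions. This loops over 2^p subsets, hence O(2^p).
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

def v(S):
    # Toy "model payoff": base contributions for A and B plus an A-B
    # interaction bonus; C is a dummy feature with no effect.
    return ((10 if "A" in S else 0) + (20 if "B" in S else 0)
            + (5 if "A" in S and "B" in S else 0))

phi = shapley_values(v, ["A", "B", "C"])
print(phi)  # A and B split the interaction bonus equally; C gets 0
```

Two classic properties are visible here: the attributions sum to `v(all) - v(empty)` (local accuracy / efficiency), and the dummy feature C receives exactly zero.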