As machine learning has become more widely adopted across industries and more deeply involved in decision making, interpretability is becoming an integral part of the data scientist's workflow; it can no longer be an afterthought. Ultimately, it is reasonable to ask whether we can understand and trust the decisions made by a predictive model.
However, in an increasingly competitive environment, data scientists are turning to ever more complex machine learning algorithms such as XGBoost and deep learning to deliver more accurate models to businesses. Unfortunately, there is a fundamental tension between accuracy and interpretability: the most accurate models are often the hardest to understand. Opaque, complicated nonlinear models limit trust and transparency, slowing the adoption of machine learning in highly regulated industries such as banking, healthcare, and insurance. But things needn't be that way!
In this talk, Leonardo Noleto, senior data scientist at Bleckwen, explores the vibrant area of machine learning interpretability and explains how interpretability techniques let us understand the inner workings of black-box models. Along the way, Leonardo offers an overview of interpretability and the trade-offs among the various approaches to making machine learning models interpretable. He concludes with a demonstration of open source tools such as LIME and SHAP.
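To give a flavor of what such tools do: LIME's core idea is to explain one prediction of a black-box model by fitting a simple, weighted linear surrogate on perturbed samples around that point. The sketch below is not LIME or SHAP itself, just a minimal NumPy illustration of that local-surrogate idea; the function names and the toy black-box model are invented for the example.

```python
import numpy as np

# A stand-in "black box": nonlinear globally, but locally near-linear.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_like_explain(f, x0, n_samples=500, width=0.1, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME's core idea)."""
    rng = np.random.RandomState(seed)
    # Perturb x0 and query the black box on the perturbed samples.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = f(X)
    # Weight each sample by its closeness to x0 (exponential kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares: intercept column plus centered features.
    A = np.hstack([np.ones((n_samples, 1)), X - x0]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[1:]  # local feature weights (intercept dropped)

x0 = np.array([0.0, 1.0])
weights = lime_like_explain(black_box, x0)
# Near x0, the true local slopes are cos(0) = 1 and 2 * 1 = 2.
print(weights)
```

The recovered weights approximate the model's local gradient, which is exactly the kind of per-prediction, per-feature attribution LIME and SHAP provide for real models.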