Interpreting Black-Box Models with Applications to Healthcare: Complex and highly interactive models such as Random Forests, Gradient Boosting, and Deep Neural Networks demonstrate superior predictive power compared to their high-bias counterparts, Linear and Logistic Regression. However, these sophisticated methods lack the interpretability of the simpler alternatives. In some areas of application, such as healthcare, model interpretability is crucial both to build confidence in model predictions and to explain results on individual cases. This talk will discuss recent approaches to explaining "black-box" models and demonstrate some recently developed tools that aid this effort.
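As a concrete illustration of one such model-agnostic approach, the sketch below computes permutation feature importance for a random forest: shuffle one feature at a time and measure how much held-out accuracy drops. This is only an example of the general idea, not necessarily one of the tools demonstrated in the talk; the dataset, model, and parameters are assumptions chosen for demonstration.

```python
# Minimal sketch of permutation feature importance for a black-box model.
# Dataset and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Fit a "black-box" model: accurate, but not directly interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

baseline = accuracy_score(y_test, model.predict(X_test))

# Shuffle each feature in turn; a larger drop in accuracy means the model
# relies more heavily on that feature.
rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    importances.append((data.feature_names[j], drop))

# Report the five features whose permutation hurts accuracy the most.
for name, drop in sorted(importances, key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: accuracy drop {drop:.3f}")
```

Global importance scores like these complement local, per-patient explanations (e.g. LIME- or SHAP-style attributions), which matter when a single prediction must be justified to a clinician.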