This talk was recorded in NYC on October 22nd, 2019, and can be viewed here: https://youtu.be/ZGSEDv8hqHY

The Case for Model Debugging

Prediction by machine learning models is fundamentally the execution of computer code. Like all good code, machine learning models should be debugged for logical or runtime errors and for security vulnerabilities. Recent high-profile failures have made it clear that machine learning models must also be debugged for disparate impact across demographic segments and other types of sociological bias. Model debugging enhances trust in machine learning directly by increasing accuracy on new or holdout data, by identifying or shrinking hackable attack surfaces, and by decreasing sociological bias. As a side effect, model debugging should also increase the understanding and explainability of model mechanisms and predictions.

This presentation outlines several standard and newer model debugging techniques and proposes several potential remediation methods for any discovered bugs. The debugging techniques discussed include adversarial examples, benchmark models, partial dependence and individual conditional expectation, random attacks, Shapley explanations of predictions and residuals, and models of residuals. The proposed remediation approaches include alternate models, editing of deployable model artifacts, missing value injection, prediction assertions, and regularization methods. (Minimal code sketches of two of these ideas appear after the bio below.)

Bio: Patrick Hall is the Senior Director of Product at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and research and development roles at SAS Institute.
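As an illustration of one of the debugging techniques named in the abstract, here is a minimal sketch of modeling a primary model's residuals with an interpretable surrogate. The dataset, model choices, and tree depth are assumptions made for this example, not details from the talk.

```python
# Minimal sketch of the "models of residuals" debugging technique:
# fit a small, interpretable model to a primary model's errors so that
# its splits describe segments of the data where the primary model
# performs poorly. Dataset and hyperparameters are illustrative only.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Primary model whose behavior we want to debug.
primary = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Absolute residuals on holdout data: where is the model most wrong?
abs_resid = np.abs(y_valid - primary.predict(X_valid))

# Shallow decision tree fit to the residuals; each leaf with a high
# mean absolute residual is an interpretable description of a bug.
resid_model = DecisionTreeRegressor(max_depth=3, random_state=0)
resid_model.fit(X_valid, abs_resid)

print(export_text(resid_model, feature_names=list(X.columns)))
```

The same holdout residuals could also be passed to a Shapley explainer to attribute large errors to individual features, which relates to the Shapley explanations of residuals mentioned above.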
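On the remediation side, prediction assertions can be read as runtime sanity checks on a deployed model's outputs. The sketch below is one possible interpretation of that idea; the plausibility bounds, the fallback rule, and the guarded_predict helper are assumptions for illustration, not an API from the talk or from H2O.ai.

```python
# Minimal sketch of the "prediction assertions" remediation idea:
# wrap a deployed model's predict call in runtime checks so that
# implausible outputs are caught and replaced with a safe fallback
# instead of being acted on. Bounds and fallback are illustrative only.

import numpy as np

PLAUSIBLE_MIN, PLAUSIBLE_MAX = 0.0, 5.0   # assumed domain-specific bounds


def guarded_predict(model, X, fallback_value):
    """Return model predictions, substituting a fallback for any
    prediction that fails basic sanity assertions."""
    preds = np.asarray(model.predict(X), dtype=float)

    # Assertion 1: predictions must be finite numbers.
    bad = ~np.isfinite(preds)
    # Assertion 2: predictions must fall inside a plausible range.
    bad |= (preds < PLAUSIBLE_MIN) | (preds > PLAUSIBLE_MAX)

    if bad.any():
        # Flag for monitoring and substitute a conservative fallback.
        print(f"{bad.sum()} prediction(s) failed assertions; using fallback.")
        preds[bad] = fallback_value

    return preds
```

In practice the fallback value might come from a simpler benchmark model, which ties this remediation approach back to the benchmark-model debugging technique listed in the abstract.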