This document discusses the importance of machine learning interpretability for enterprise adoption of ML. It begins with a brief history of AI and ML, noting that while adoption has increased, most companies are still in the exploration phase and have not yet deployed ML models into production. Regulators and the people affected by model decisions increasingly require explanations for model predictions. The document then outlines different levels of and approaches to ML interpretability: enforcing constraints before a model is built, producing global explanations after model building with techniques such as partial dependence plots, and generating local explanations for individual predictions with methods such as LIME and Shapley additive explanations (SHAP). It emphasizes that interpretability enables more robust, fair, and transparent models.
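
To make the distinction between global and local explanations concrete, the following minimal sketch computes a partial dependence curve for one feature (a global view of the model) and Shapley additive explanations for a single prediction (a local view), using scikit-learn and the shap package. The synthetic dataset, gradient-boosted model, and feature choice are illustrative assumptions, not taken from the document.

```python
# A minimal sketch of the global vs. local explanation techniques named above.
# Assumptions: synthetic data stands in for an enterprise dataset, and a
# gradient-boosted classifier stands in for the production model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence
import shap

# Fit a model on synthetic data (illustrative stand-in).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global explanation: partial dependence of the prediction on feature 0,
# averaging the model's output over the other features.
pd_result = partial_dependence(model, X, features=[0])
print("Partial dependence (first 5 grid points):", pd_result["average"][0][:5])

# Local explanation: Shapley additive explanations attribute one
# prediction's output to each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print("SHAP values for one prediction:", shap_values[0])
```

Here the partial dependence values describe the model's average behavior across the dataset, while the SHAP values decompose one individual prediction into per-feature contributions, which is the kind of explanation a regulator or an affected person might request.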