This document discusses interpreting machine learning models, focusing on techniques for interpreting random forests. Random forests are often regarded as "black boxes" because of their complexity, yet their predictions can be explained by decomposing them into mathematically exact feature contributions. A decision tree prediction can be written as a bias term plus the contributions of each feature encountered along the decision path, and this operational view of decision trees extends to random forest predictions despite their complexity.
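As a minimal sketch of the idea (not code from the original post), the snippet below decomposes a single scikit-learn regression tree prediction into a bias (the mean target at the root) plus per-feature contributions, attributing each split's change in node mean to the feature split on; the dataset and the `decompose_prediction` helper are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor

# Fit a small regression tree on an example dataset.
X, y = load_diabetes(return_X_y=True)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

def decompose_prediction(tree, x):
    """Express a single prediction as bias + per-feature contributions.

    The bias is the mean target value at the root node; each split on the
    decision path attributes the change in node mean to the feature that
    the node splits on, so bias + contributions sums to the prediction.
    """
    t = tree.tree_
    node_values = t.value.squeeze()                        # mean target at each node
    path = tree.decision_path(x.reshape(1, -1)).indices    # node ids visited, root to leaf
    bias = node_values[path[0]]                            # root node mean
    contributions = np.zeros(x.shape[0])
    for parent, child in zip(path[:-1], path[1:]):
        contributions[t.feature[parent]] += node_values[child] - node_values[parent]
    return bias, contributions

bias, contribs = decompose_prediction(tree, X[0])
print("prediction:             ", tree.predict(X[0].reshape(1, -1))[0])
print("bias + sum(contribs):   ", bias + contribs.sum())
```

Because a random forest prediction is the average of its trees' predictions, the same decomposition could in principle be averaged across trees to obtain forest-level feature contributions.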