Creating better models is a critical part of building a good data science product. It is relatively easy to build a first-cut machine learning model, but what does it take to build a reasonably good, or even state-of-the-art, model? Ensemble models, which exploit the power of computing to search the solution space, are one answer. Ensemble methods aren't new: they form the basis of some extremely powerful machine learning algorithms, such as random forests and gradient boosting machines. The key idea behind ensembling is that a consensus from diverse models is more reliable than any single source. Amit and Bargava discuss various strategies for building ensemble models, demonstrating how to combine the outputs of various base models (logistic regression, support vector machines, decision trees, etc.) to create a stronger, better model. Using an example, they cover bagging, boosting, voting, and stacking, and explore where ensemble models consistently produce better results than the best-performing single models.
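The combination the abstract describes can be sketched with a soft-voting ensemble in scikit-learn. This is a minimal illustration, not the speakers' actual demo: the dataset (`load_breast_cancer`) and model settings are assumptions chosen only to make the example self-contained.

```python
# Sketch of a voting ensemble over the base models named in the abstract:
# logistic regression, a support vector machine, and a decision tree.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

base_models = [
    ("lr", LogisticRegression(max_iter=5000)),
    ("svm", SVC(probability=True, random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
]

# Soft voting averages the predicted class probabilities of the bases,
# so diverse models can compensate for one another's mistakes.
ensemble = VotingClassifier(estimators=base_models, voting="soft")

for name, model in base_models + [("ensemble", ensemble)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {score:.3f}")
```

Swapping `voting="soft"` for `voting="hard"` switches to majority voting on predicted labels; `StackingClassifier` from the same module implements the stacking strategy the talk covers.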