This document compares and contrasts boosting with other ensemble methods such as bagging and random forests. It discusses two specific boosting algorithms: AdaBoost, which fits each new model to a reweighted version of the training data that emphasizes examples the current ensemble misclassifies, and gradient boosting, which fits each new model to the residuals of the ensemble built so far. Both build their models sequentially, aiming to produce predictions with low bias and low variance. The document provides pseudocode for AdaBoost classification and gradient boosting regression, and explains how each step of a boosting ensemble improves on the predictions of the steps before it.
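
To make the contrast between the two fitting targets concrete, here is a minimal sketch of both recipes in Python. It is not the document's pseudocode: the shallow scikit-learn decision trees as base learners, the parameter names (`n_rounds`, `learning_rate`), and the assumption of labels in {-1, +1} for AdaBoost are all illustrative choices, not taken from the source.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor


def adaboost_classify(X, y, n_rounds=50):
    """Discrete AdaBoost sketch: each round fits a stump to reweighted
    examples, upweighting the examples the ensemble got wrong."""
    n = len(y)  # labels y are assumed to be in {-1, +1}
    w = np.full(n, 1.0 / n)  # start with uniform example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)            # fit to weighted data
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)   # weighted error rate
        err = np.clip(err, 1e-10, 1 - 1e-10)        # guard against log/div by 0
        alpha = 0.5 * np.log((1 - err) / err)       # this model's vote weight
        w *= np.exp(-alpha * y * pred)              # increase weight on mistakes
        w /= w.sum()                                # renormalize the weights
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas


def gradient_boost_regress(X, y, n_rounds=100, learning_rate=0.1):
    """Gradient boosting sketch with squared-error loss: each round fits a
    small tree to the residuals of the current ensemble prediction."""
    f0 = y.mean()                                   # best constant model
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residuals = y - pred                        # negative gradient of L2 loss
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X, residuals)                      # fit to residuals
        pred += learning_rate * tree.predict(X)     # shrunken additive update
        trees.append(tree)
    return f0, trees
```

The two loops share the same sequential shape; they differ only in what each new model is trained on: reweighted examples for AdaBoost versus residuals of the running prediction for gradient boosting.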