This document summarizes a team's analysis of the Springleaf lending dataset from a Kaggle competition. The team tested several classification methods, including logistic regression, random forest, XGBoost, and stacking. Their best performing model was an XGBoost stacking ensemble, which reached an overall accuracy of 81.1% and 29.2% accuracy on the minority class. Although the team's final result was 79.5%, extensive data preprocessing and hyperparameter tuning allowed their best model to exceed the winner's publicly reported accuracy of 80.4%.
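
To make the approach concrete, the sketch below shows one common way to build a stacking ensemble with an XGBoost meta-learner on top of logistic regression and random forest base models, using scikit-learn and xgboost. The file name, column names, hyperparameters, and train/test split are illustrative assumptions, not the team's actual pipeline or tuned settings.

```python
# Minimal sketch of a stacking ensemble with an XGBoost meta-learner.
# Assumes a numeric feature table with a binary "target" column (hypothetical names).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Hypothetical Springleaf-style training data.
df = pd.read_csv("train.csv")
X = df.drop(columns=["target"])
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Base learners produce out-of-fold predictions that feed the XGBoost meta-learner.
stack = StackingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ],
    final_estimator=XGBClassifier(
        n_estimators=500, max_depth=4, learning_rate=0.05, eval_metric="logloss"
    ),
    cv=5,
)
stack.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, stack.predict(X_test)))
```

Overall accuracy alone can be misleading on an imbalanced target like this one, which is why the summary also reports minority-class accuracy; per-class metrics (e.g., a confusion matrix or recall on the minority class) are the natural companion check for a model like the one sketched above.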