This document discusses advanced tree-based machine learning methods: bagging, random forests, and boosting. Bagging draws bootstrap resamples of the training data, grows a tree on each, and averages their predictions, which reduces variance relative to a single tree. Random forests build on bagging by considering only a random subset of features at each split, which decorrelates the trees and reduces variance further. Boosting instead fits trees sequentially, with each new tree emphasizing the training examples that earlier trees misclassified, so the ensemble becomes a progressively stronger learner. All three methods aggregate many tree models to improve over a single decision tree.
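As a minimal sketch of the comparison described above, the snippet below fits a single tree, a bagged ensemble, a random forest, and a boosted ensemble on synthetic data and reports held-out accuracy. The use of scikit-learn and the specific dataset and hyperparameters are assumptions for illustration; the text names no particular library or settings.

```python
# Illustrative sketch only: scikit-learn, the synthetic dataset, and the
# hyperparameters below are assumptions, not from the source text.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    BaggingClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic tabular classification data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # Single tree: the baseline the ensembles aim to beat.
    "single tree": DecisionTreeClassifier(random_state=0),
    # Bagging: trees grown on bootstrap resamples, predictions averaged
    # (the default base estimator is a decision tree).
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    # Random forest: bagging plus a random feature subset at each split.
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    # Boosting: trees fit sequentially, each correcting earlier errors.
    "boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

On data like this the three ensembles typically score higher than the single tree, reflecting the variance reduction (bagging, forests) and sequential error correction (boosting) discussed above, though exact numbers depend on the data and settings.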