This document provides an introduction to random forests, an ensemble machine learning method for classification and regression. Random forests build on decision trees but aggregate the predictions of many trees to improve accuracy over any single tree. Each tree is trained on a bootstrap sample of the data, and at each split only a random subset of features is considered. This randomness decorrelates the trees, so averaging their predictions reduces variance more effectively than it does for a single tree or for bagged trees that consider all features at every split. The document outlines the key characteristics and advantages of random forests, such as high accuracy, the ability to handle large datasets with many variables, and greater resistance to overfitting than individual decision trees.
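The two sources of per-tree randomness described above, plus the final aggregation step, can be sketched in a few lines. This is a minimal illustration using only the standard library, not a full implementation; the function names are illustrative, and the sqrt(n_features) subset size is one common default rather than a requirement.

```python
import random
from collections import Counter

def bootstrap_sample(rows, rng):
    # Sample len(rows) rows WITH replacement: each tree in the
    # forest is trained on its own perturbed copy of the dataset.
    return [rng.choice(rows) for _ in rows]

def random_feature_subset(n_features, rng):
    # At each split, a tree considers only a random subset of
    # features; ~sqrt(n_features) is a common default for classification.
    k = max(1, int(n_features ** 0.5))
    return rng.sample(range(n_features), k)

def majority_vote(tree_predictions):
    # For classification, the forest's prediction is the class
    # most often predicted across the individual trees.
    return Counter(tree_predictions).most_common(1)[0][0]
```

Because each tree sees different rows and different candidate features, the trees make partially independent errors, which is what allows the vote (or, for regression, the average) to outperform any single tree.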