This document summarizes a machine learning course on kernel machines. The course covers feature maps that transform data into higher-dimensional spaces, allowing nonlinear models to fit complex patterns. It then discusses how kernel functions can efficiently compute inner products in these transformed spaces without explicitly constructing the feature maps. Specifically, it shows how support vector machines, linear regression, and other algorithms can be kernelized by reformulating them to depend only on inner products between examples rather than on explicit model weights.
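The kernel idea described above can be sketched concretely. The snippet below is a minimal illustration (not from the course itself), assuming a degree-2 polynomial kernel k(x, z) = (x · z)² on 2-D inputs, whose explicit feature map is φ(x) = [x₁², √2·x₁x₂, x₂²]: the kernel returns the same value as the inner product of the explicit features, without ever forming them.

```python
import numpy as np

def phi(x):
    """Explicit quadratic feature map for a 2-D input vector."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, z):
    """Degree-2 polynomial kernel: (x . z)^2, computed in the input space."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

explicit = np.dot(phi(x), phi(z))  # inner product in the 3-D feature space
implicit = poly_kernel(x, z)       # same quantity via the kernel function
assert np.isclose(explicit, implicit)  # both equal 121.0 here
```

The point of the kernelized formulation is that `poly_kernel` costs O(d) in the input dimension, while the explicit feature space can grow combinatorially (or be infinite, as with the RBF kernel), which is why kernelized algorithms work with inner products between examples instead of explicit weights.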