This document discusses the bias-variance tradeoff in machine learning. It defines bias as the systematic error between a model's average prediction and the true values, and variance as how much the model's predictions scatter when it is trained on different samples of the data. High bias corresponds to underfitting, while high variance corresponds to overfitting. Because reducing one tends to increase the other, the goal is to balance both for the model that generalizes best. Examples show that a model with low bias and low variance achieves good accuracy on both training and testing data, whereas a high-bias model performs poorly on both, and a high-variance model achieves high training accuracy but markedly worse testing accuracy.
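The tradeoff described above can be sketched with a small experiment: fitting polynomials of increasing degree to noisy data and comparing training versus testing error. This is a minimal illustration assuming a sine-shaped ground truth and NumPy's `polyfit`; the dataset, noise level, and degrees are illustrative assumptions, not taken from the document.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of one period of a sine curve, split into train and test sets.
# These data are illustrative assumptions for the sketch.
x_train = rng.uniform(0, 1, 30)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 30)
x_test = rng.uniform(0, 1, 30)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 30)

def mse(degree):
    """Fit a least-squares polynomial of the given degree on the training
    set and return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# Degree 1: high bias (underfit) -> poor accuracy on train AND test.
# Degree 3: balanced            -> good accuracy on both.
# Degree 15: high variance (overfit) -> low train error, worse test error.
for d in (1, 3, 15):
    train_err, test_err = mse(d)
    print(f"degree {d:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Raising the degree always lowers training error, but past some point the extra flexibility fits the noise, so test error stops improving and eventually grows: that turning point is the bias-variance balance the document describes.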