This document summarizes the paper on Bootstrap Your Own Latent (BYOL), a self-supervised representation learning method that does not rely on negative pairs. BYOL trains an online network to predict the target network's representation of the same image under a different data augmentation; the target network's weights are an exponential moving average of the online network's. The loss is the mean squared error between the l2-normalized online prediction and the target projection. By bootstrapping representations from its own augmented views, BYOL achieves state-of-the-art performance on several image classification benchmarks without negative pairs. Ablation studies show BYOL is more robust than contrastive methods to the choice of augmentations and to batch size, but requires careful tuning of the target network's update rate.
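The loss and target update described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, network forward passes are omitted, and parameters are represented as plain NumPy arrays. The normalized MSE reduces to `2 - 2 * cos(p, z)`, and `tau` is the target update rate the ablations tune.

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """MSE between l2-normalized online prediction and target projection.

    Equivalent to 2 - 2 * cosine_similarity(online_pred, target_proj).
    """
    p = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=-1, keepdims=True)
    return np.sum((p - z) ** 2, axis=-1)

def ema_update(target_params, online_params, tau=0.99):
    """Move target weights toward online weights by an exponential moving average.

    tau close to 1 means a slowly-moving target (the update rate that
    requires careful tuning per the ablations).
    """
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]
```

In practice the loss is also symmetrized by swapping which augmented view feeds the online and target networks, and gradients flow only through the online branch.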