This document summarizes several papers on semi-supervised learning methods published between 2017 and 2019. It describes techniques such as consistency regularization, which encourages a model to produce consistent predictions for an unlabeled example under augmentation or perturbation; entropy minimization, which pushes the model toward confident, low-entropy predictions on unlabeled data; and virtual adversarial training, which perturbs inputs in the adversarial (worst-case) direction that most changes the model's output, encouraging local smoothness. Papers discussed include MixMatch, Mean Teachers, Virtual Adversarial Training, and S4L.
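As a rough illustration of two of the losses above, the following is a minimal NumPy sketch (the function names and the use of squared error for consistency are illustrative assumptions; the papers themselves use model-specific formulations, e.g. KL divergence in Virtual Adversarial Training):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_aug):
    # Consistency regularization: penalize disagreement between the
    # predicted distributions for an unlabeled example and its
    # augmented view (squared error is one common choice).
    p, q = softmax(logits_orig), softmax(logits_aug)
    return np.mean((p - q) ** 2)

def entropy_loss(logits):
    # Entropy minimization: penalize high-entropy (uncertain)
    # predictions on unlabeled data.
    p = softmax(logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))
```

Identical predictions for the two views give zero consistency loss, and a confident (peaked) prediction yields lower entropy loss than a uniform one.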