This document discusses score-based generative models. Rather than estimating the probability density function directly, which requires an intractable normalizing constant, score-based models estimate the score function: the gradient of the log probability density with respect to the data. Because the score is unchanged by normalization, flexible deep learning architectures can be used without constraining them to define a normalized density. The score function can be estimated from data with score matching, or with its scalable variants, sliced score matching and denoising score matching. Given a trained score model, samples are generated with Langevin dynamics, which follows the estimated score while injecting noise.

Two challenges remain: score estimates are inaccurate in low-density regions, where little training data falls, and samplers can fail to recover the relative weights of well-separated modes. Noise Conditional Score Networks address both by modeling the scores of the data distribution perturbed with a sequence of noise levels. Score-based models can also solve inverse problems without retraining, since the learned score acts as a reusable prior.
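A one-line derivation shows why normalization can be skipped. For an energy-based model $p_\theta(x) = e^{f_\theta(x)} / Z_\theta$ (notation assumed here for illustration), the constant $Z_\theta$ does not depend on $x$ and so vanishes when taking the score:

```latex
\nabla_x \log p_\theta(x)
  = \nabla_x f_\theta(x) - \underbrace{\nabla_x \log Z_\theta}_{=\,0}
  = \nabla_x f_\theta(x).
```

Any network $f_\theta$ can therefore serve as an unnormalized log-density, since only its gradient is ever needed.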
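As a concrete sketch of denoising score matching (a minimal illustrative example, not code from the document): perturb clean data with Gaussian noise of scale sigma and regress a model on the target -z/sigma, the score of the perturbation kernel; the minimizer is the score of the perturbed distribution. With 1-D Gaussian data and a linear model s(y) = a*y, the fit has a closed form and can be checked against the analytic answer:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
n = 200_000

x = rng.standard_normal(n)   # clean data drawn from N(0, 1)
z = rng.standard_normal(n)   # perturbation noise
y = x + sigma * z            # noisy samples, distributed as N(0, 1 + sigma^2)

# Denoising score matching target: the score of the Gaussian
# perturbation kernel, -(y - x) / sigma^2 = -z / sigma.
target = -z / sigma

# Linear score model s(y) = a * y; least-squares fit of a.
a_hat = (y @ target) / (y @ y)

# The true score of the perturbed density N(0, 1 + sigma^2) is
# -y / (1 + sigma^2), so a_hat should be close to -1 / 1.25 = -0.8.
```

The key point is that the loss never touches the unknown data score; it only needs the added noise, which makes the objective trivial to compute.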
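Given any score estimate, Langevin dynamics turns it into a sampler. A minimal sketch (function names are illustrative, not from the document), using the analytic score -x of a standard normal so the output can be verified:

```python
import numpy as np

def langevin_sample(score, x0, step=0.01, n_steps=1000, seed=0):
    """Unadjusted Langevin dynamics:
    x_{t+1} = x_t + step * score(x_t) + sqrt(2 * step) * z_t."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        z = rng.standard_normal(x.shape)
        x = x + step * score(x) + np.sqrt(2 * step) * z
    return x

# Target N(0, 1): its score is d/dx log N(x; 0, 1) = -x.
# Run 5000 independent chains; samples should be roughly standard normal.
samples = langevin_sample(lambda x: -x, x0=np.zeros(5000))
```

Only the score enters the update, which is why an unnormalized model suffices; the step size and step count trade discretization bias against mixing time.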
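Noise Conditional Score Networks are typically sampled with annealed Langevin dynamics: run Langevin at a sequence of decreasing noise levels, shrinking the step size with the noise scale. A toy sketch under stated assumptions (the step-size rule eps * sigma^2 / sigma_L^2 follows the common NCSN recipe; the network is replaced by the analytic score of a perturbed standard normal so the result is checkable):

```python
import numpy as np

def annealed_langevin(score, sigmas, n_chains=5000, eps=0.05,
                      steps_per_level=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_chains) * sigmas[0]   # start at the coarsest scale
    for sigma in sigmas:
        alpha = eps * sigma**2 / sigmas[-1]**2      # smaller steps at lower noise
        for _ in range(steps_per_level):
            z = rng.standard_normal(n_chains)
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * z
    return x

# Score of N(0, 1) convolved with N(0, sigma^2), i.e. of N(0, 1 + sigma^2);
# this stands in for a trained noise-conditional score network.
perturbed_score = lambda x, sigma: -x / (1 + sigma**2)

sigmas = np.geomspace(1.0, 0.1, 6)  # geometric noise schedule, coarse to fine
samples = annealed_langevin(perturbed_score, sigmas)
```

The large noise levels smooth the landscape so chains cross low-density regions and mix between modes; the final small level leaves the samples close to the data distribution.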