This document summarizes key concepts from a PhD dissertation on uncertainty in deep learning:
1) There are two types of uncertainty: epistemic uncertainty, which arises from lack of knowledge (e.g., too little training data in some region of input space) and decreases as more data is collected, and aleatoric uncertainty, which arises from inherent noise in the observations (e.g., sensor noise) and cannot be reduced with more data. Deep learning models need to estimate both in order to provide predictive uncertainty.
2) Variational inference approximates intractable Bayesian posteriors by minimizing the KL divergence between a tractable approximating distribution and the true posterior over the weights. Dropout can be interpreted as such a Bayesian approximation: the variational distribution multiplies each weight row by a Bernoulli random variable, i.e., randomly drops it.
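For reference, a minimal sketch of the variational objective described above (the notation q_θ(ω) for the approximating distribution over weights ω and p(ω | X, Y) for the true posterior is assumed here, not taken from the dissertation's exact notation):

```latex
% Fit q_theta(omega) to the intractable posterior p(omega | X, Y):
\min_{\theta} \; \mathrm{KL}\big(q_\theta(\omega)\,\|\,p(\omega \mid X, Y)\big)
\;\;\Longleftrightarrow\;\;
\max_{\theta} \;
\underbrace{\mathbb{E}_{q_\theta(\omega)}\!\big[\log p(Y \mid X, \omega)\big]
- \mathrm{KL}\big(q_\theta(\omega)\,\|\,p(\omega)\big)}_{\text{evidence lower bound (ELBO)}}
```

Maximizing the ELBO with a dropout-style q_θ(ω) recovers the usual dropout training objective up to a weight-decay term, which is what justifies reading dropout networks as approximate Bayesian models.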
3) With dropout as the variational distribution, predictive uncertainty in regression is estimated from multiple stochastic forward passes (MC dropout): the predictive mean is the average of the sampled outputs, epistemic uncertainty is the variance of those outputs across passes, and aleatoric uncertainty enters as an additive observation-noise term.
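Below is a minimal sketch of this procedure, assuming PyTorch; the network `net`, the number of passes, and the observation-noise variance `sigma2` are illustrative placeholders rather than values from the dissertation:

```python
import torch
import torch.nn as nn

# Illustrative regression network with dropout layers that will stay
# stochastic at prediction time.
net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(net, x, n_passes=50, sigma2=0.1):
    """Return predictive mean, epistemic variance, and total predictive variance.

    sigma2 is an assumed homoscedastic observation-noise (aleatoric) variance,
    playing the role of the inverse model precision in the dissertation.
    """
    net.train()  # keep dropout active so each forward pass samples new masks
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_passes)])  # (T, N, 1)
    mean = samples.mean(dim=0)           # predictive mean over passes
    epistemic_var = samples.var(dim=0)   # spread across stochastic passes
    total_var = epistemic_var + sigma2   # add aleatoric (noise) variance
    return mean, epistemic_var, total_var

# Example: predictive uncertainty for a batch of unseen inputs.
x_new = torch.linspace(-3, 3, 100).unsqueeze(1)
mean, epi_var, tot_var = mc_dropout_predict(net, x_new)
```

The key design point is that dropout is left on at test time: the variance of the sampled outputs captures the epistemic part, while the fixed noise term added on top captures the aleatoric part.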