I will start by describing privacy risk in deep learning, in particular the memorization of training inputs during network training and the attacks that can expose these memorized inputs. I will then discuss methods to mitigate memorization and increase privacy, introducing the concept of differential privacy as a measure of exposure risk, as well as the idea of using generative networks to create synthetic data that protects the original training data. Finally, I will remark on the generalization benefits that arise as a side effect of privacy-enhancing deep learning methods.