This document discusses approaches to training deep neural networks that are robust to adversarial examples. It frames adversarial robustness as a minimax game between the network and an attacker, and presents projected gradient descent (PGD) and the Fast Gradient Sign Method (FGSM) as approximate solvers for the inner maximization problem during training. Experiments show that adversarially trained models are more robust than standard networks.
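As a minimal sketch of the two attacks named above, the following NumPy code runs FGSM and PGD against a toy differentiable model (a linear regressor with squared-error loss, chosen here purely for illustration; the document does not specify a model). FGSM takes a single signed-gradient step of size epsilon; PGD iterates smaller signed-gradient ascent steps and projects back onto the L-infinity ball of radius epsilon around the clean input. The function names and parameters (`loss_and_grad`, `alpha`, `steps`) are assumptions, not from the source.

```python
import numpy as np

def loss_and_grad(x, w, y):
    # Toy loss: squared error of a linear model.
    # L(x) = (w.x - y)^2, so dL/dx = 2*(w.x - y)*w
    err = w @ x - y
    return err ** 2, 2.0 * err * w

def fgsm(x, w, y, eps):
    # Fast Gradient Sign Method: one step of size eps
    # in the sign of the input gradient.
    _, g = loss_and_grad(x, w, y)
    return x + eps * np.sign(g)

def pgd(x, w, y, eps, alpha=0.01, steps=40):
    # PGD on the inner maximization: repeated signed-gradient
    # ascent, projected back onto the L_inf ball around x.
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, w, y)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv
```

In an adversarial-training loop, the inner `pgd` call would generate a perturbed input for each training example, and the outer minimization would update the model weights on that perturbed input rather than the clean one.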