This document summarizes and analyzes first-order meta-learning algorithms. It discusses first-order MAML (FOMAML), which approximates the MAML objective using only first-order information; the FOMAML meta-update is equivalent to applying the gradient from the last inner-loop step to the initial parameters. It also analyzes Reptile, which repeatedly trains on a sampled task with SGD and moves the initialization toward the adapted parameters, in effect averaging the per-task parameter updates. A Taylor expansion shows that, in expectation, the leading terms of the MAML, FOMAML, and Reptile meta-gradients are the average gradient across minibatches and the average inner product between gradients computed on different minibatches; the inner-product term rewards within-task generalization. Experiments show similar performance for FOMAML and Reptile. The analysis suggests that SGD itself may generalize well in part because taking multiple SGD steps implicitly approximates MAML.
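
A minimal sketch of the Reptile update described above, assuming a hypothetical one-parameter linear-regression task family; sample_task, loss_grad, and all hyperparameter values here are illustrative placeholders, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical task family: 1-D linear regression y = a * x with random slope a.
    return rng.uniform(-2.0, 2.0)

def loss_grad(w, a, batch_size=10):
    # Gradient of the mean squared error 0.5 * (w*x - a*x)^2 on a random minibatch,
    # which works out to mean((w - a) * x^2).
    x = rng.normal(size=batch_size)
    return np.mean((w - a) * x**2)

def reptile(meta_iters=1000, inner_steps=5, inner_lr=0.1, meta_lr=0.1):
    w = 0.0  # the meta-learned initialization
    for _ in range(meta_iters):
        a = sample_task()
        w_task = w
        for _ in range(inner_steps):
            # k steps of plain SGD on the sampled task
            w_task -= inner_lr * loss_grad(w_task, a)
        # Reptile meta-update: move the initialization toward the adapted weights,
        # i.e., average the per-task parameter updates (w_task - w) over tasks.
        w += meta_lr * (w_task - w)
        # FOMAML would instead apply only the last inner-loop gradient
        # to the initialization, e.g.: w -= meta_lr * loss_grad(w_task, a)
    return w

print(reptile())
```

In this sketch the only difference between the two algorithms is the meta-update: Reptile uses the full displacement accumulated over the inner loop, while FOMAML uses just the gradient from the final inner step, which matches the expansion above agreeing in its leading terms.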