This document summarizes a neural approach to grammatical error correction (GEC) built on better pre-training and sequential transfer learning. It first reviews prior work that frames GEC as a low-resource machine translation task and prior work on denoising autoencoders. It then describes the authors' approach: context-aware preprocessing, pre-training a model on data synthetically perturbed to mimic realistic error types, sequential fine-tuning, and several postprocessing techniques. Evaluation on the BEA 2019 shared task shows that the approach narrows the performance gap between the restricted and low-resource tracks and performs well across error types.
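To make the pre-training step concrete, here is a minimal sketch of the synthetic-noising idea: perturb clean sentences with token-level edits so that a sequence-to-sequence model can be pre-trained as a denoising autoencoder on (noisy, clean) pairs. The edit types, probabilities, and the small confusion set below are illustrative assumptions, not the authors' exact noising function.

```python
import random

# Small confusion set mimicking common error types (illustrative assumption).
CONFUSIONS = {
    "their": ["there", "they're"],
    "its": ["it's"],
    "a": ["an", "the"],
    "the": ["a"],
    "to": ["too", "two"],
}

def perturb(tokens, p_drop=0.05, p_swap=0.05, p_sub=0.10, rng=random):
    """Return a noisy copy of `tokens` by randomly dropping, swapping,
    and substituting words. Probabilities are per-token assumptions."""
    noisy = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        r = rng.random()
        if r < p_drop:
            i += 1  # drop the token (simulates a missing-word error)
            continue
        if r < p_drop + p_swap and i + 1 < len(tokens):
            noisy.extend([tokens[i + 1], tok])  # swap adjacent tokens (word-order error)
            i += 2
            continue
        if r < p_drop + p_swap + p_sub and tok.lower() in CONFUSIONS:
            tok = rng.choice(CONFUSIONS[tok.lower()])  # substitute a confusable word
        noisy.append(tok)
        i += 1
    return noisy

if __name__ == "__main__":
    rng = random.Random(0)
    clean = "They went to the park because the weather was nice".split()
    noisy = perturb(clean, rng=rng)
    # The (noisy, clean) pair would serve as one denoising pre-training example.
    print("noisy :", " ".join(noisy))
    print("clean :", " ".join(clean))
```

In this setup, the model is trained to map the perturbed sentence back to the original, so that fine-tuning on annotated GEC corpora can then specialize it to real learner errors.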