This document summarizes key aspects of variational autoencoders (VAEs). A VAE is a generative model that learns a latent representation of data: because the true posterior over the latent variables is intractable, it is approximated by an encoder network, and the model is trained by maximizing a variational lower bound on the log-likelihood (the ELBO). Semi-supervised VAE variants exploit unlabeled data by learning representations shared between labeled and unlabeled examples. The framework has also been extended to recurrent sequence models, convolutional architectures, disentangled representations, and multi-modal data. Importance weighted autoencoders (IWAEs) average multiple importance-weighted samples to obtain a tighter evidence lower bound than the standard VAE objective.
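The relationship between the standard ELBO and the importance-weighted bound can be made concrete with a small Monte Carlo sketch. The following is a minimal NumPy illustration, not a full VAE implementation: it assumes a diagonal Gaussian encoder and a toy linear decoder (the names `log_weights`, `elbo`, `iwae`, and the decoder `W` are all hypothetical, chosen for this example). Both bounds are computed from the same log importance weights log p(x|z) + log p(z) - log q(z|x); the ELBO averages them, while the IWAE bound takes a log-mean-exp, which by Jensen's inequality is always at least as large.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_log_pdf(x, mu, log_var):
    # Log density of a diagonal Gaussian, summed over the last axis.
    return -0.5 * np.sum(
        log_var + (x - mu) ** 2 / np.exp(log_var) + np.log(2 * np.pi), axis=-1
    )

def log_weights(x, enc_mu, enc_log_var, decode, k):
    # Draw k latent samples with the reparameterization trick:
    # z = mu + sigma * eps, eps ~ N(0, I), so gradients could flow
    # through mu and log_var in a real training setup.
    eps = rng.standard_normal((k,) + enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_log_var) * eps
    log_pz = gaussian_log_pdf(z, 0.0, np.zeros_like(z))      # standard-normal prior
    log_qz = gaussian_log_pdf(z, enc_mu, enc_log_var)        # encoder density
    log_px_z = np.array([decode(zi, x) for zi in z])         # decoder likelihood
    return log_px_z + log_pz - log_qz                        # log importance weights

def elbo(log_w):
    # Standard k-sample ELBO estimate: average of the log weights.
    return np.mean(log_w)

def iwae(log_w):
    # IWAE bound: log of the mean weight, computed stably (log-mean-exp).
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))

# Toy setup: 2-d latent space, 3-d data, linear Gaussian decoder (hypothetical).
W = rng.standard_normal((2, 3))
decode = lambda z, x: gaussian_log_pdf(x, z @ W, np.zeros_like(x))
x = rng.standard_normal(3)
enc_mu, enc_log_var = rng.standard_normal(2), np.zeros(2)

log_w = log_weights(x, enc_mu, enc_log_var, decode, k=10)
elbo_est, iwae_est = elbo(log_w), iwae(log_w)
```

Because both estimates reuse the same samples, `iwae_est >= elbo_est` holds deterministically here; in training, the gap is what makes the importance-weighted objective a tighter bound on log p(x) as k grows.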