This document provides an overview of recurrent neural networks (RNNs) and their applications. It discusses how RNNs remember previous inputs through feedback loops and internal states. Long short-term memory (LSTM) networks are presented as an improvement over standard RNNs for handling long-term dependencies. The document also introduces word embeddings, which map words to dense vectors, and transformers, which offer an alternative to RNNs based on self-attention. Code examples of RNNs in TensorFlow 2.0 are also shown.
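To give a taste of the kind of TensorFlow 2.0 code the document presents, here is a minimal sketch of a simple RNN trained on toy data; the layer sizes, data shapes, and training settings are illustrative assumptions rather than details taken from the document.

```python
import numpy as np
import tensorflow as tf

# Toy data: 32 sequences, each with 10 time steps of 8 features
# (the shapes are arbitrary choices for this sketch).
x = np.random.rand(32, 10, 8).astype("float32")
y = np.random.rand(32, 1).astype("float32")

# A SimpleRNN layer carries a hidden state from one time step to the next;
# that feedback loop is what lets the network remember earlier inputs.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(10, 8)),
    tf.keras.layers.Dense(1),
])

model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)
```

Swapping `tf.keras.layers.SimpleRNN` for `tf.keras.layers.LSTM` gives the long short-term memory variant the document describes for capturing longer-range dependencies.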