This document provides an overview of data pipelines and the technologies available for building them. It begins with a brief history of pipelines and their origins in UNIX, then discusses common pipeline concepts such as decoupling of tasks, encapsulation of processing, and reuse of tasks. Several graphical and programmatic pipeline solutions are presented, including Luigi, Piecepipe, Spring Batch, and workflow engines. Big data pipelines built on Hadoop, using technologies such as Pig and Oozie, are also covered. Finally, cloud-based pipeline technologies from AWS, including Kinesis, Data Pipeline, Lambda, and EMR, are described. Throughout, examples illustrate how each technology can be used to specify and run data processing pipelines.