Apache Beam is a unified programming model for expressing efficient, portable data processing pipelines. It provides language-specific SDKs for building pipelines that can run on multiple distributed processing backends. A Beam pipeline is composed of PCollections (datasets) connected by PTransforms (operations on them), which describe a data processing workflow independently of any particular execution environment: the same pipeline can run on Apache Spark, Apache Flink, or Google Cloud Dataflow. Examples of pipelines covered include counting words in text, finding trending hashtags on Twitter, and computing billing estimates.