Craig Chambers discusses his past work on programming languages and compilers and his current work on Flume, a data-parallel programming system. Flume aims to make data-parallel programming easy: it provides high-level abstractions while automatically optimizing and executing pipelines. From a program's data-parallel operations, Flume builds a deferred execution graph, which it optimizes into MapReduce jobs before execution. Early experience shows that Flume is easier to use than raw MapReduce and that the optimizer improves performance. Future work includes extending Flume to additional execution substrates and auto-tuning pipelines.
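
The deferred-graph-then-optimize idea can be sketched in miniature. The code below is an illustrative toy, not the real Flume/FlumeJava API: all class and method names (`PCollection`, `map`, `filter`, `optimized`, `run`) are assumptions chosen for clarity. It records operations instead of running them, fuses consecutive map stages (a stand-in for Flume's fusion of a chain of operations into fewer MapReduce jobs), and only then executes.

```python
class PCollection:
    """A deferred collection: records operations instead of running them."""

    def __init__(self, data=None, ops=None):
        self.data = data if data is not None else []
        self.ops = ops or []  # recorded stages: list of ("map"|"filter", fn)

    def map(self, fn):
        # Build a new graph node; nothing executes yet.
        return PCollection(self.data, self.ops + [("map", fn)])

    def filter(self, fn):
        return PCollection(self.data, self.ops + [("filter", fn)])

    def optimized(self):
        """Fuse runs of consecutive map stages into single stages,
        mimicking (in spirit) Flume's optimization pass."""
        fused = []
        for kind, fn in self.ops:
            if kind == "map" and fused and fused[-1][0] == "map":
                prev = fused[-1][1]
                # Compose the two map functions into one stage.
                fused[-1] = ("map", lambda x, f=fn, g=prev: f(g(x)))
            else:
                fused.append((kind, fn))
        return fused

    def run(self):
        """Optimize the recorded graph, then execute it over the data."""
        out = self.data
        for kind, fn in self.optimized():
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out


pipeline = (PCollection([1, 2, 3, 4])
            .map(lambda x: x + 1)
            .map(lambda x: x * 10)      # fused with the previous map
            .filter(lambda x: x > 20))
# Three recorded stages become two after optimization; running the
# pipeline yields [30, 40, 50].
```

The key design point this illustrates is laziness: because `map` and `filter` only record nodes, the optimizer sees the whole pipeline at once and can rewrite it before any data moves.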