This document summarizes a presentation on implementing an ETL project with Apache Impala on the Hadoop platform. It outlines the components of a Hadoop ETL project, including data storage, sources, ETL processing, metadata, and targets. It then describes a customer case in which a SQL-based reporting system was replaced with Impala, giving users access to standardized data for reporting and analytics within the required time windows. The presentation emphasizes that successful Hadoop ETL projects depend on sound data engineering practices for organization, modularization, logging, and testing.