This document summarizes a presentation on building a streaming lakehouse with Apache Flink and Apache Hudi. The presentation introduces Hudi as a way to unify batch and streaming workloads on a centralized data lake platform, covering features such as efficient upserts and deletes, incremental processing of change streams, and automatic catalog synchronization. It demonstrates running Flink and Hudi on Amazon EMR and outlines several ongoing Hudi projects, including a new metaserver and a lake cache, aimed at further improving query performance and metadata handling for streaming data lakes.
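As a rough illustration of the upsert and incremental-read features described above (not taken from the presentation itself), a Hudi table can be declared and written through Flink SQL along these lines. The table name, schema, and storage path are hypothetical; the `connector`, `table.type`, and `read.streaming.enabled` options are drawn from Hudi's Flink connector configuration:

```sql
-- Illustrative sketch: a Hudi table declared from Flink SQL.
-- MERGE_ON_READ table type supports efficient streaming upserts/deletes;
-- read.streaming.enabled lets downstream jobs consume it as a change stream.
CREATE TABLE hudi_orders (
  order_id STRING PRIMARY KEY NOT ENFORCED,  -- record key for upserts
  amount   DOUBLE,
  ts       TIMESTAMP(3)
) WITH (
  'connector' = 'hudi',
  'path' = 's3://my-bucket/hudi/orders',     -- hypothetical storage location
  'table.type' = 'MERGE_ON_READ',
  'read.streaming.enabled' = 'true'          -- incremental/streaming reads
);

-- Upsert semantics: a row with an existing order_id replaces the old version.
INSERT INTO hudi_orders
VALUES ('o1', 10.5, TIMESTAMP '2024-01-01 00:00:00');
```

On EMR, a DDL like this would typically be submitted through the Flink SQL client with the Hudi Flink bundle on the classpath; exact option names can vary between Hudi releases, so the version-specific documentation should be consulted.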