This document summarizes using Akka Streams to stream large database result sets to Amazon S3. The key points are:

- Akka Streams can stream large amounts of data without exhausting memory by processing it in bounded chunks.
- A stream consists of a source (the database query), a flow (serialization), and a sink (the S3 upload).
- The stream serializes database rows into bytes and uploads them to S3 in parallel chunks using S3's multipart upload API, avoiding the timeouts that a single long-running upload can hit.
- Anorm provides an Akka Streams source for querying the database, and a custom S3 sink uploads chunks to S3 concurrently. Retries and error handling would be needed for production use.
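To illustrate the chunking idea behind the multipart upload, here is a minimal plain-Scala sketch. It does not use Akka Streams or the AWS SDK (the real pipeline would wire Anorm's source to an S3 sink); the names `ChunkedUploadSketch`, `serialize`, and `toParts` are hypothetical, and the row encoding is an assumed CSV-like format. It shows the one constraint the sink must respect: S3 requires every multipart part except the last to be at least 5 MB, so rows are buffered until a part reaches the minimum size.

```scala
object ChunkedUploadSketch {
  // S3's multipart upload API requires each part except the last
  // to be at least 5 MB.
  val MinPartSize: Int = 5 * 1024 * 1024

  // Serialize one database row to bytes (hypothetical CSV-style encoding).
  def serialize(row: Seq[String]): Array[Byte] =
    (row.mkString(",") + "\n").getBytes("UTF-8")

  // Lazily group a stream of serialized rows into upload parts of at
  // least `minSize` bytes; the final remainder may be smaller, which
  // S3 permits for the last part.
  def toParts(rows: Iterator[Array[Byte]], minSize: Int): Iterator[Array[Byte]] =
    new Iterator[Array[Byte]] {
      private val buf = scala.collection.mutable.ArrayBuffer.empty[Byte]
      def hasNext: Boolean = buf.nonEmpty || rows.hasNext
      def next(): Array[Byte] = {
        // Accumulate rows until the part is large enough or input ends.
        while (rows.hasNext && buf.length < minSize) buf ++= rows.next()
        val part = buf.toArray
        buf.clear()
        part
      }
    }
}
```

Because `toParts` works on iterators, only one part's worth of bytes is buffered at a time, which is the same bounded-memory property the Akka Streams pipeline provides. In the real stream, each emitted part would be handed to S3's `UploadPart` call, with the returned ETags collected for the final `CompleteMultipartUpload`.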