This document describes using Akka Streams and Kafka to log events to multiple storage systems. A consumer writes each event record both to PostgreSQL, for fast access, and to Hadoop, for cheaper long-term storage. Akka Streams keeps the consumer code small and makes it easy to add new storage backends. Events are written to HDFS as Avro files and later converted to indexed ORC files. Both PostgreSQL and Hive are accessed through their JDBC drivers. Base classes provide the common parsing, storage, and Kafka-consumer functionality, which storage-specific flows extend.
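The fan-out described above can be sketched with Akka Streams and the Alpakka Kafka connector. This is a minimal illustration, not the document's actual code: the names `EventRecord`, `parse`, `postgresSink`, and `hdfsSink` are assumptions, the sinks are placeholders for real JDBC and HDFS/Avro writers, and the build must supply the `akka-stream-kafka` dependency.

```scala
// Hypothetical sketch: one Kafka source fanned out to two storage sinks.
// Real sinks would write via JDBC (PostgreSQL) and an HDFS Avro writer.
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.{Flow, Sink}
import org.apache.kafka.common.serialization.StringDeserializer

object EventLogger extends App {
  implicit val system: ActorSystem = ActorSystem("event-logger")

  val settings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // assumed broker address
      .withGroupId("event-logger")

  // Common parsing step shared by every storage backend,
  // analogous to the base-class functionality mentioned above.
  final case class EventRecord(payload: String)
  val parse = Flow[String].map(EventRecord.apply)

  // Placeholder sinks standing in for storage-specific flows.
  val postgresSink = Sink.foreach[EventRecord](e => println(s"pg: $e"))
  val hdfsSink     = Sink.foreach[EventRecord](e => println(s"hdfs: $e"))

  Consumer
    .plainSource(settings, Subscriptions.topics("events"))
    .map(_.value)
    .via(parse)
    .alsoTo(postgresSink) // fan out: each record reaches both sinks
    .runWith(hdfsSink)
}
```

Adding another backend amounts to writing one more sink and chaining another `alsoTo`, which is the code-reduction benefit the summary attributes to Akka Streams.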