This document compares the stream-processing frameworks Spark Streaming and Storm. Spark Streaming processes data in micro-batches (on the order of 500 ms), applies batch-processing semantics, and guarantees exactly-once processing. Storm processes data record by record with sub-second latency but guarantees only at-least-once processing. Benchmarking showed that as the number of data producers increased, Storm's throughput and spout latency both increased, while Spark Streaming's throughput increased with larger batch sizes and higher data loads. The document proposes testing further use cases and adding monitoring dashboards.
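The core distinction above, micro-batching versus record-at-a-time processing, can be sketched with a small toy example. This is not the real API of either framework; the function names and the fixed arrival timestamps are invented for illustration, and the batch grouping simply buckets records by a 500 ms interval the way a micro-batch scheduler conceptually does.

```python
def process_record_by_record(records, handler):
    """Storm-style: each record is handed to the handler as it arrives."""
    for record in records:
        handler(record)

def process_micro_batches(records_with_ts, handler, interval_ms=500):
    """Spark Streaming-style: records are grouped by a fixed batch interval
    (here ~500 ms) and the handler sees one list per batch."""
    batches = {}
    for ts_ms, record in records_with_ts:
        # Bucket each record by which batch interval its timestamp falls into.
        batches.setdefault(ts_ms // interval_ms, []).append(record)
    for key in sorted(batches):
        handler(batches[key])

# Hypothetical workload: five records arriving over ~1.2 seconds.
arrivals = [(0, "a"), (100, "b"), (600, "c"), (700, "d"), (1100, "e")]

per_record = []
process_record_by_record([r for _, r in arrivals], per_record.append)

per_batch = []
process_micro_batches(arrivals, per_batch.append)

print(per_record)  # ['a', 'b', 'c', 'd', 'e']
print(per_batch)   # [['a', 'b'], ['c', 'd'], ['e']]
```

The record-by-record path invokes the handler once per element, which is why Storm can achieve sub-second per-record latency, while the micro-batch path trades a small fixed delay (the batch interval) for the throughput and exactly-once bookkeeping advantages of batch semantics.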