This document summarizes a talk on common patterns in Spark Streaming jobs: mapping data, aggregating with monoids, and storing results. Abstracting aggregation behind a monoid interface allows different implementations to be swapped in, such as Bloom filters in place of exact sets. It also describes using dependency injection to make the storage layer pluggable across environments. The talk suggests additions to Spark's API to support these patterns directly.
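The patterns above can be sketched without Spark itself, since they only rely on the aggregation being a monoid (an associative operation with an identity element) and the store being an injected interface. The sketch below is illustrative, not the talk's actual code: the names `Monoid`, `Store`, `InMemoryStore`, and `process_batch` are assumptions, the Bloom filter is a deliberately tiny toy (an integer bitmask), and in a real job the per-batch fold would run inside a Spark transformation such as `reduceByKey`.

```python
from abc import ABC, abstractmethod


class Monoid(ABC):
    """An associative combine operation with an identity element."""
    @abstractmethod
    def zero(self): ...
    @abstractmethod
    def plus(self, a, b): ...


class SumMonoid(Monoid):
    """Exact counting/summing."""
    def zero(self): return 0
    def plus(self, a, b): return a + b


class BloomFilterMonoid(Monoid):
    """Toy Bloom filter over an int bitmask; combining filters is bitwise OR.

    Illustrative only -- a real implementation (e.g. Algebird's BloomFilter)
    would size the bit array and hash count from target error rates.
    """
    def __init__(self, num_bits=64, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes

    def zero(self): return 0
    def plus(self, a, b): return a | b

    def create(self, item):
        """Build a single-item filter by setting num_hashes bits."""
        bits = 0
        for i in range(self.num_hashes):
            bits |= 1 << (hash((i, item)) % self.num_bits)
        return bits

    def contains(self, bf, item):
        """May return false positives, never false negatives."""
        single = self.create(item)
        return bf & single == single


class Store(ABC):
    """Injected storage backend: swap in-memory, a database, etc."""
    @abstractmethod
    def merge(self, key, value, monoid): ...
    @abstractmethod
    def get(self, key): ...


class InMemoryStore(Store):
    """Suitable for tests; a production deployment would inject a durable store."""
    def __init__(self):
        self._data = {}

    def merge(self, key, value, monoid):
        # Monoid-merge the new partial value into whatever is already stored.
        self._data[key] = monoid.plus(self._data.get(key, monoid.zero()), value)

    def get(self, key):
        return self._data.get(key)


def process_batch(records, key_fn, value_fn, monoid, store):
    """Map each record to (key, value), fold values per key with the monoid,
    then merge the per-batch partials into the injected store."""
    partials = {}
    for r in records:
        k = key_fn(r)
        partials[k] = monoid.plus(partials.get(k, monoid.zero()), value_fn(r))
    for k, v in partials.items():
        store.merge(k, v, monoid)
```

Because the batch logic only sees the `Monoid` and `Store` interfaces, switching from exact sums to approximate set membership is just a matter of passing `BloomFilterMonoid` and mapping each record through `create`; the storage backend is chosen the same way, which is the dependency-injection point the talk makes.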