This document discusses using Spark to model catastrophic events more efficiently than MapReduce. It describes how catastrophe (cat) models start from large input datasets but generate even larger intermediate datasets that require complex analytics. Spark is better suited to this work than MapReduce because it can keep intermediate results in memory and share resources across the stages of a job, delivering faster performance at lower cost. The document advocates designing cat model workflows in Spark to take advantage of its flexible architecture and concise, high-quality code.
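As an illustrative sketch only (the table names, paths, and loss formula below are assumptions, not taken from the document), a cat model pipeline in Spark might compute the large intermediate loss dataset once, cache it in executor memory, and then run several analytics over it without recomputing or re-reading it from disk, which is the sharing pattern MapReduce cannot express without persisting intermediate results between jobs.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CatModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cat-model-sketch")
      .getOrCreate()

    // Hypothetical exposure table: one row per insured location (paths are assumptions).
    val exposures = spark.read.parquet("/data/exposures")
    // Hypothetical event catalog: simulated catastrophe events with damage ratios.
    val events = spark.read.parquet("/data/event_catalog")

    // The expensive step: cross exposures with events to produce per-event,
    // per-location losses -- the intermediate dataset far larger than either input.
    val losses = exposures
      .crossJoin(events)
      .withColumn("loss", col("insured_value") * col("damage_ratio")) // toy loss formula
      .cache() // keep the intermediate result in memory for reuse

    // Several downstream analytics reuse the cached data without recomputing it.
    val lossByEvent  = losses.groupBy("event_id").agg(sum("loss").as("event_loss"))
    val lossByRegion = losses.groupBy("region").agg(sum("loss").as("region_loss"))

    lossByEvent.write.mode("overwrite").parquet("/out/loss_by_event")
    lossByRegion.write.mode("overwrite").parquet("/out/loss_by_region")

    spark.stop()
  }
}
```

In a MapReduce implementation, each of the two aggregations above would typically be a separate job reading the intermediate losses back from HDFS; the `cache()` call is what lets Spark share that data across both computations in memory.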