Project Matsu aimed to provide persistent data resources and elastic computing for disaster relief by making imagery available for large-scale cloud processing. It evaluated three approaches:

1) Hadoop with MapReduce: split each image into parts and process the parts in parallel.
2) Hadoop streaming with Python: preprocess the images into a single file and process it line by line.
3) The Sector distributed file system: keep each image whole on a node and apply user-defined functions, so images never need to be split.

The goal was to enable change detection between images taken at different times, to assist relief workers.
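The Hadoop streaming approach (each line of a preprocessed file holding one record, read from stdin and written to stdout by a Python mapper) can be sketched as below. This is a minimal illustration, not Project Matsu's actual code: the record format (`tile_id<TAB>old_pixels_csv<TAB>new_pixels_csv`), the `detect_change` helper, and the intensity threshold are all hypothetical assumptions chosen to show the line-by-line change-detection pattern.

```python
import sys


def detect_change(old_pixels, new_pixels, threshold=30):
    """Count pixels whose intensity differs by more than `threshold`.

    A stand-in for a real change-detection routine; pixel lists and the
    threshold are illustrative assumptions, not the project's algorithm.
    """
    return sum(1 for a, b in zip(old_pixels, new_pixels)
               if abs(a - b) > threshold)


def map_line(line, threshold=30):
    """Parse one streaming record: tile_id<TAB>old_csv<TAB>new_csv.

    Returns (tile_id, number_of_changed_pixels) for that image tile.
    """
    tile_id, old_csv, new_csv = line.rstrip("\n").split("\t")
    old = [int(v) for v in old_csv.split(",")]
    new = [int(v) for v in new_csv.split(",")]
    return tile_id, detect_change(old, new, threshold)


def main(stdin=sys.stdin, stdout=sys.stdout):
    """Hadoop-streaming-style mapper: one input line in, one result line out."""
    for line in stdin:
        tile_id, changed = map_line(line)
        stdout.write(f"{tile_id}\t{changed}\n")


if __name__ == "__main__":
    main()
```

Hadoop streaming would invoke such a script as the mapper (e.g. `hadoop jar hadoop-streaming.jar -mapper mapper.py ...`), splitting the preprocessed file by line so each mapper instance handles a subset of tiles independently.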