This document discusses using distributed processing to scale computation beyond a single machine. It introduces the idea of storing large files across many machines with a distributed file system such as HDFS. MapReduce is presented as a way to process large amounts of data in parallel: the input is split into chunks, a map function is applied to each chunk independently, and a reduce step recombines the intermediate results. Hadoop is introduced as an open-source MapReduce framework that uses HDFS for storage and coordinates the processing of data across a cluster of machines. Examples are given of writing Map and Reduce functions and running jobs on Hadoop to demonstrate distributed processing of large datasets.
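To make the Map and Reduce steps concrete, here is a minimal word-count sketch written in the Hadoop Streaming style, where the mapper and reducer are ordinary programs reading stdin and writing stdout. The file names mapper.py and reducer.py and the paths used below are illustrative assumptions, not taken from the original text; the exact location of the streaming jar depends on the Hadoop installation.

```python
#!/usr/bin/env python3
# mapper.py -- a minimal Hadoop Streaming mapper sketch (word count).
# Reads input lines from stdin and emits one "word<TAB>1" pair per word.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- a minimal Hadoop Streaming reducer sketch (word count).
# Hadoop sorts the mapper output by key before the reduce phase, so all
# occurrences of the same word arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

# Emit the final word's total.
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Assuming the input has been copied into HDFS (for example with `hdfs dfs -put books.txt /input/`), such a job would typically be launched with the streaming jar, along the lines of `hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar -input /input -output /output -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py`; Hadoop then handles splitting the input, running mappers near the data, and shuffling intermediate pairs to the reducers.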