This document discusses Hadoop and its core components, HDFS and MapReduce, as a solution to big data problems. HDFS stores large files as blocks distributed across a cluster of machines, where they can be processed in parallel by MapReduce. MapReduce enables distributed processing of large datasets in three stages: mapping input data to intermediate key-value pairs, shuffling and sorting those pairs by key, and reducing them to final results. In this way, Hadoop provides a scalable, cost-effective answer to the volume, variety, and velocity challenges of big data.
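To make the map/shuffle/reduce flow concrete, here is a minimal word-count sketch against the Hadoop MapReduce Java API. The class names (`WordCount`, `TokenizerMapper`, `IntSumReducer`) and the command-line input/output paths are illustrative assumptions, not taken from the document; the flow itself follows the standard Hadoop `Mapper`/`Reducer` contract.

```java
// Minimal WordCount sketch for the Hadoop MapReduce API.
// Class names and paths are illustrative, not from the document.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit an intermediate (word, 1) pair for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: after the framework shuffles and sorts intermediate pairs by key,
  // sum the counts for each word and emit the final (word, total) pair.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation to shrink shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory (assumed)
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory (assumed)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Note that the shuffle-and-sort stage between the two classes is supplied by the framework: Hadoop groups all intermediate values sharing a key and delivers them to a single `reduce` call, which is why the reducer can simply sum its `Iterable` of counts.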