The document introduces Daniel Templeton and Inyoung Cho, who will be hosting a hands-on Hadoop lab. They define big data as any data that is difficult to store in a traditional database because of its size, its changing schemas, or its unstructured nature. The lab provides overviews of the core Hadoop components: HDFS is a distributed file system that chunks and replicates files across nodes; MapReduce provides parallel processing in two phases, mapping and reducing; Hive allows SQL queries on Hadoop data by translating them into MapReduce jobs; Impala improves on Hive by removing the MapReduce layer; and Pig provides a scripting language that is likewise translated into MapReduce jobs. The hands-on lab is self-paced.
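The two-phase MapReduce model described above can be illustrated with a minimal local sketch. This is plain Python simulating the map, shuffle, and reduce steps on a word-count problem, not Hadoop's actual Java API; the function names and sample input are invented for illustration.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) pairs; in Hadoop this runs in parallel per chunk."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key before the reduce step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical input standing in for file chunks spread across HDFS nodes.
lines = ["Hadoop chunks files", "Hadoop replicates chunks"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["hadoop"])  # 2
```

In a real cluster the map tasks run on the nodes holding each file chunk and the framework performs the shuffle across the network, but the data flow is the same as in this single-process sketch.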