This document provides an overview of a hands-on workshop on running Hadoop on Amazon Elastic MapReduce (EMR). It walks through setting up an AWS account, signing up for the required services (S3 and EC2), creating an S3 bucket, generating access keys, creating a new EMR job flow, and viewing the job's results in the S3 bucket. It also covers installing and running Hadoop locally, importing and reviewing data in HDFS, and the MapReduce programming model.
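Since the overview ends with the MapReduce programming model, a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API may help orient readers before the workshop steps. The class names and the input/output paths below are illustrative assumptions, not the workshop's own code; the same jar could be submitted to a local Hadoop installation or supplied to an EMR job flow.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative word-count job: counts how often each word appears in the input.
public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the per-word counts emitted by all mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // combiner reuses the reducer
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output locations come from the command line; locally these are
    // HDFS paths, while on EMR they would typically be s3:// URIs.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The mapper and reducer only exchange key/value pairs, which is what lets Hadoop shard the work across a local pseudo-distributed node or an EMR cluster without changing the job code.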