MapReduce is a programming model and framework for processing large datasets in a distributed manner. It involves two user-defined functions: map and reduce. The map function processes individual input records to generate intermediate key-value pairs, and the reduce function merges all intermediate values that share the same key. Between the two phases, the framework performs a shuffle step that groups the intermediate pairs by key. Hadoop is an open-source implementation of MapReduce that uses HDFS (the Hadoop Distributed File System) for storage. A typical MapReduce job in Hadoop involves defining the map and reduce functions, configuring the job, and submitting it to the JobTracker (replaced by YARN's ResourceManager in Hadoop 2 and later), which schedules tasks across nodes and monitors execution.
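To make the map/shuffle/reduce flow concrete, here is a minimal single-process sketch of the classic word-count example in Python. This is an illustrative simulation of the model, not the Hadoop API: the `map_fn`, `shuffle`, and `reduce_fn` names are ours, and the in-memory grouping step stands in for the distributed shuffle that Hadoop performs across nodes.

```python
from collections import defaultdict

def map_fn(document):
    """Map phase: emit an intermediate (word, 1) pair for each word."""
    for word in document.split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle phase: group all intermediate values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_fn(key, values):
    """Reduce phase: merge all values for one key into a single count."""
    return (key, sum(values))

def word_count(documents):
    # Run map over every input record, then shuffle, then reduce per key.
    pairs = [pair for doc in documents for pair in map_fn(doc)]
    grouped = shuffle(pairs)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

counts = word_count(["the quick brown fox", "the lazy dog"])
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In a real Hadoop job the same logic would live in a `Mapper` and `Reducer` class, and the framework would handle partitioning the input, running tasks on many nodes, and regrouping the intermediate data by key.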