Hadoop exploits data locality by scheduling map tasks, whenever possible, on the nodes that already hold the task's input data on local disk. This avoids the costly transfer of large amounts of input data across the network. When a slave node requests a new map task, the master prioritizes pending tasks whose input split is stored on that slave. If no node-local task is available, Hadoop falls back to rack-level locality, scheduling a task whose data resides on another node in the same rack. Because HDFS replicates each block across multiple nodes (three by default), a node-local or rack-local replica is usually available. By keeping computation close to the data, Hadoop can process large datasets efficiently in distributed environments.
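The preference order described above (node-local, then rack-local, then off-rack) can be sketched as follows. This is a minimal illustration of the idea, not Hadoop's actual scheduler code; the names `choose_task`, `pending_tasks`, and `rack_of` are hypothetical.

```python
def choose_task(pending_tasks, node, rack_of):
    """Pick a map task for `node`, preferring data locality.

    pending_tasks: list of dicts, each with a "replicas" set naming the
                   nodes that hold a copy of the task's input split.
    rack_of:       mapping from node name to rack name.
    Returns (task, locality_level) or (None, None) if nothing is pending.
    """
    # 1. Node-local: the input split has a replica on the requesting node.
    for task in pending_tasks:
        if node in task["replicas"]:
            return task, "node-local"
    # 2. Rack-local: a replica lives on another node in the same rack.
    for task in pending_tasks:
        if any(rack_of[r] == rack_of[node] for r in task["replicas"]):
            return task, "rack-local"
    # 3. Off-rack: fall back to any remaining task; its data must be
    #    fetched across the network.
    return (pending_tasks[0], "off-rack") if pending_tasks else (None, None)


# Two racks: n1 and n2 share rack r1; n3 sits alone on rack r2.
rack_of = {"n1": "r1", "n2": "r1", "n3": "r2"}
tasks = [
    {"id": 1, "replicas": {"n3"}},
    {"id": 2, "replicas": {"n2"}},
]

task, level = choose_task(tasks, "n3", rack_of)   # task 1 is node-local on n3
task, level = choose_task(tasks, "n1", rack_of)   # task 2 is rack-local via n2
```

In the real system, HDFS's block replication (three copies by default, spread across racks) is what makes a node-local or rack-local match likely for most task requests.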