This document discusses search problems and planning agents. It begins by contrasting reflex agents, which choose actions based only on the current state, with planning agents, which consider the future consequences of their actions; a planning agent must therefore have a model of how the world changes in response to actions. The document then shows how to represent a search problem with a state space, a successor function, a start state, and a goal test, and works through examples of pathfinding and dot-eating problems formulated as search problems. Finally, it discusses using search trees and search graphs to systematically explore possible plans through the state space and solve these problems.
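The pieces described above can be sketched in code. Below is a minimal, hypothetical example (not taken from the document): a small grid-pathfinding problem whose states are cells, whose successor function returns (action, next state) pairs, and whose plans are found by breadth-first graph search, which tracks visited states so each state is expanded at most once. The grid size, wall positions, and function names are all illustrative assumptions.

```python
from collections import deque

# Hypothetical 3x4 grid world; WALLS marks blocked cells (assumed layout).
GRID_ROWS, GRID_COLS = 3, 4
WALLS = {(1, 1), (1, 2)}

def successors(state):
    """Successor function: yield (action, next_state) pairs for legal moves."""
    r, c = state
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    for action, (nr, nc) in moves.items():
        if 0 <= nr < GRID_ROWS and 0 <= nc < GRID_COLS and (nr, nc) not in WALLS:
            yield action, (nr, nc)

def graph_search(start, goal):
    """Breadth-first graph search: explore plans level by level,
    remembering visited states to avoid re-expanding them."""
    frontier = deque([(start, [])])   # each entry: (state, plan so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan               # a shortest sequence of actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                       # no plan reaches the goal

plan = graph_search((0, 0), (2, 3))
print(plan)  # a 5-action plan from the top-left to the bottom-right cell
```

The same skeleton generalizes to the dot-eating example: there the state must include both the agent's position and the set of remaining dots, which illustrates why the choice of state space, not just the world map, determines the size of the search problem.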