Agents are systems that perceive their environment through sensors and act upon it through actuators. There are five basic types of agents: table-driven agents, which select actions by looking up the entire percept sequence in a predefined table; simple reflex agents, which select actions based only on the current percept, using condition-action rules; model-based reflex agents, which select actions based on the current percept combined with stored internal state that tracks unobserved aspects of the world; goal-based agents, which choose actions to achieve given or computed goals while tracking state; and utility-based agents, which choose actions to maximize a utility function measuring how desirable, or "happy", a state is.
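The contrast between the first reflex types can be sketched in code. The example below is a minimal illustration, not from the source, using the classic two-square vacuum world: a simple reflex agent acts on the current percept alone, while a model-based reflex agent keeps internal state (here, the last known status of each square) so it can stop once it believes everything is clean.

```python
def simple_reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules over the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"


class ModelBasedVacuumAgent:
    """Model-based reflex agent: combines the current percept with internal state."""

    def __init__(self):
        # Internal model: last known status of each square (None = never observed).
        self.state = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.state[location] = status
        if status == "Dirty":
            # Update the model to reflect the predicted effect of sucking.
            self.state[location] = "Clean"
            return "Suck"
        if all(s == "Clean" for s in self.state.values()):
            # The stateless reflex agent can never make this decision:
            # it has no memory of squares it is not currently observing.
            return "NoOp"
        return "Right" if location == "A" else "Left"
```

Note that the simple reflex agent oscillates forever between the squares even when both are clean, whereas the model-based agent's stored state lets it recognize that no further action is needed.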