The document provides an introduction to agents and intelligent systems. It defines key concepts such as agents, environments, agent architectures, and rationality. An agent is anything that perceives its environment through sensors and acts upon it through actuators. Agent architectures include table-driven, reactive, model-based, goal-based, and learning agents. A rational agent selects actions that maximize its expected performance measure (utility), given the percepts it has received so far; a bounded rational agent pursues the same goal under limited computational resources such as time and memory. Environments can be fully or partially observable, deterministic or stochastic, and single-agent or multi-agent. The ideal is an autonomous agent that learns to achieve its goals in a dynamic environment. A reactive agent, for example, maps the current percept directly to an action with no internal state, as in the sketch below.
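
To make the reactive architecture concrete, here is a minimal sketch of a simple reflex agent in a two-location vacuum world. The environment, its location names, the action set, and the performance measure are illustrative assumptions introduced here, not details taken from the document.

```python
import random

# Hypothetical two-square world; locations and statuses are assumptions.
LOCATIONS = ("A", "B")

def reflex_vacuum_agent(percept):
    """Simple reflex agent: maps the current percept directly to an action,
    with no internal state or model of the world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Otherwise move to the other square.
    return "Right" if location == "A" else "Left"

def run_episode(steps=10):
    """Simulate the agent in a randomly dirtied environment.
    Performance measure (an assumption): +1 per square cleaned."""
    dirt = {loc: random.choice(["Clean", "Dirty"]) for loc in LOCATIONS}
    location = random.choice(LOCATIONS)
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent((location, dirt[location]))
        if action == "Suck" and dirt[location] == "Dirty":
            dirt[location] = "Clean"
            score += 1
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
    return score

if __name__ == "__main__":
    print("squares cleaned:", run_episode())
```

A model-based agent would extend this by keeping internal state (for example, which squares it already knows to be clean), which matters once the environment is only partially observable.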