This document discusses Markov chains and related topics:
- It defines Markov chains as stochastic processes that take values in a countable set, with the memoryless property that future states only depend on the present state.
- It distinguishes between discrete-time and continuous-time Markov chains, and introduces transition probabilities and matrices.
- It covers Chapman-Kolmogorov equations for computing multi-step transition probabilities recursively, and the concept of a stationary distribution, to which the state probabilities converge over time under suitable conditions.
- Examples discussed include transforming processes into Markov chains, a camera inventory model, and using Markov chains to model bond credit ratings.
- Special cases covered include two-state chains, sequences of independent random variables, and random walks.
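The ideas above (transition matrices, Chapman-Kolmogorov, and stationary distributions) can be sketched numerically. This is a minimal illustration, assuming a hypothetical two-state chain; the transition matrix values are made up for the example:

```python
import numpy as np

# Hypothetical two-state chain; row i gives the transition
# probabilities out of state i (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Chapman-Kolmogorov: the n-step transition probabilities are the
# entries of the matrix power P^n, since P^(m+n) = P^m @ P^n.
P4 = np.linalg.matrix_power(P, 4)

# For this chain the rows of P^n converge to the stationary
# distribution pi, which satisfies pi = pi @ P.
pi = np.linalg.matrix_power(P, 200)[0]
print(pi)  # approximately [5/6, 1/6]
```

Here the stationary distribution can be checked by hand: pi = (5/6, 1/6) satisfies (5/6)(0.9) + (1/6)(0.5) = 5/6.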