This document discusses hidden Markov models (HMMs). It begins by covering Markov chains and Markov models, then defines HMMs, noting that their hidden states emit observable outputs. The key components of an HMM (the transition probabilities, observation probabilities, and initial state probabilities) are explained, and examples involving weather and jars of marbles illustrate how the relevant probabilities are calculated. The three main problems in using HMMs are identified as evaluation, decoding, and learning: evaluation calculates the probability of an observation sequence given a model; decoding finds the most likely state sequence that produced an observation sequence; and learning determines the model parameters that best fit training data.
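As a concrete illustration of the components listed above and of the evaluation problem, the sketch below encodes a small two-state weather HMM and computes the probability of an observation sequence with the forward algorithm. The state names, observation symbols, and all probability values are invented for illustration and are not taken from the document.

```python
# Minimal HMM sketch: the evaluation problem solved with the forward
# algorithm. All names and numbers below are illustrative assumptions.

states = ["Rainy", "Sunny"]
observations = ["walk", "shop", "clean"]  # observation symbols, indexed 0..2

# Initial state probabilities: pi[i] = P(first state is states[i])
pi = [0.6, 0.4]

# Transition probabilities: A[i][j] = P(next state j | current state i)
A = [[0.7, 0.3],
     [0.4, 0.6]]

# Observation (emission) probabilities: B[i][k] = P(observation k | state i)
B = [[0.1, 0.4, 0.5],
     [0.6, 0.3, 0.1]]

def forward(obs_seq):
    """Return P(obs_seq | model), summing over all hidden state paths."""
    n = len(states)
    # alpha[i] = P(observations so far, and current state is i)
    alpha = [pi[i] * B[i][obs_seq[0]] for i in range(n)]
    for obs in obs_seq[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs]
                 for j in range(n)]
    return sum(alpha)

# Probability of observing "walk" then "shop" under this model
p = forward([0, 1])
```

Decoding replaces the sum over previous states with a max (the Viterbi algorithm), and learning re-estimates `pi`, `A`, and `B` from training sequences (typically with the Baum-Welch algorithm).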