1. Bellman's optimality criterion (the principle of optimality) states that, whatever the initial state and first decision are, the remaining decisions must constitute an optimal policy with respect to the state that results from the first decision.
2. Bellman's optimality equation extends dynamic programming to infinite-horizon problems by discounting the cost incurred at each time step and expressing the optimal cost from a state as the minimum, over actions, of the expected immediate cost plus the discounted expected optimal future cost (a standard form of the equation is sketched below).
3. A dice game Markov decision process is presented as an example, illustrating how expected rewards and transition probabilities are calculated and plugged into Bellman's optimality equation to determine the optimal policy (see the value-iteration sketch after the equation).
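Written out, a standard form of the optimality equation in point 2 reads as follows. The notation here is assumed rather than taken from the source: $V^{*}(s)$ is the optimal expected discounted cost from state $s$, $c(s,a)$ the expected immediate cost of action $a$ in state $s$, $\gamma \in [0,1)$ the discount factor, and $P(s' \mid s,a)$ the transition probabilities.

```latex
% Bellman optimality equation, discounted-cost form
% (notation assumed, not taken verbatim from the source):
\[
  V^{*}(s) \;=\; \min_{a \in A(s)}
    \Big( c(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big)
\]
```

For a reward-maximization MDP, as in the dice game of point 3, the same equation holds with $\min$ replaced by $\max$ and costs replaced by rewards.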
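The summary does not reproduce the dice game's actual rules, so the sketch below uses a hypothetical two-state game purely to illustrate the mechanics: in state "in" you may "stay" (roll, expected reward 4, with probability 1/3 the game ends) or "quit" (reward 10, game ends); the payoffs, probabilities, and state/action names are all illustrative assumptions, not the source's numbers. Value iteration repeatedly applies the Bellman optimality backup until the values converge, and the optimal policy is then read off as the argmax action.

```python
# Value iteration on a toy dice game MDP (hypothetical rules, see above).
# States: "in" (still playing) and "end" (absorbing, value 0).
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "in": {
        "stay": [(2 / 3, "in", 4), (1 / 3, "end", 4)],
        "quit": [(1.0, "end", 10)],
    },
    "end": {},  # absorbing: no actions, value stays 0
}

GAMMA = 1.0  # undiscounted is safe here: the game ends with probability 1
TOL = 1e-9

V = {s: 0.0 for s in transitions}
while True:
    delta = 0.0
    for s, actions in transitions.items():
        if not actions:  # absorbing state keeps its value
            continue
        # Bellman optimality backup: best action's expected return,
        # i.e. max over actions of immediate reward + discounted future value
        best = max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < TOL:
        break

# Optimal policy: argmax over actions of the one-step backup
policy = {
    s: max(
        actions,
        key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in actions[a]),
    )
    for s, actions in transitions.items()
    if actions
}
print(V)       # V["in"] converges to ~12.0
print(policy)  # {"in": "stay"}
```

With these made-up numbers, always staying satisfies $V(\text{in}) = 4 + \tfrac{2}{3} V(\text{in})$, giving $V(\text{in}) = 12$, which beats quitting for 10, so the sketch reports "stay" as the optimal action.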