This document introduces Monte Carlo methods and their application to the game of Go. It explains how Monte Carlo methods simulate random games to approximate quantities, such as a position's value, that lack a closed-form solution. It then describes the multi-armed bandit problem and the upper confidence bound (UCB) algorithm for balancing exploration and exploitation. Finally, it outlines how the UCT algorithm extends upper confidence bounds to tree search, guiding move selection in Monte Carlo Go programs.
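As a point of reference for the bandit material summarized above, the following is a minimal sketch of UCB1-style arm selection as it is commonly presented; the function names, the Bernoulli test arms, and the exploration constant are illustrative assumptions, not details taken from this document.

```python
import math
import random


def ucb1_select(counts, values, c=math.sqrt(2)):
    """Pick the arm maximizing mean reward plus an exploration bonus.

    counts[i] is how often arm i was played, values[i] its mean reward.
    Untried arms are played first.
    """
    total = sum(counts)
    best, best_score = None, float("-inf")
    for i, (n, v) in enumerate(zip(counts, values)):
        if n == 0:
            return i  # always try an unplayed arm first
        score = v + c * math.sqrt(math.log(total) / n)  # UCB1 index
        if score > best_score:
            best, best_score = i, score
    return best


# Illustrative example: three Bernoulli arms with different win rates.
probs = [0.2, 0.5, 0.7]
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]
for _ in range(1000):
    arm = ucb1_select(counts, values)
    reward = 1.0 if random.random() < probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean update
print(counts)  # the best arm should receive most of the plays
```

UCT applies this same selection rule at every node of the search tree, treating each legal move as an arm and each simulated game's outcome as the reward.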