This document discusses Restricted Boltzmann Machines (RBMs), unsupervised neural networks that learn a probability distribution over their inputs and serve as building blocks of deep learning. An RBM has two layers of nodes: visible units, which receive the input data, and hidden units, which learn a representation of that input. There are no connections within a layer, which is what makes the model "restricted". Training adjusts the weights between the two layers through alternating forward and backward passes so that the network reconstructs its input, driving down the reconstruction error between the original and reconstructed data. RBMs can be stacked to form Deep Belief Networks, in which the hidden units of one RBM serve as the visible units of the next; higher layers thereby learn progressively more abstract representations that are useful for tasks such as classification.
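
To make the forward/backward training pass concrete, here is a minimal NumPy sketch of a Bernoulli RBM trained with one step of contrastive divergence (CD-1), the standard approximation for this kind of reconstruction-based update. The class name `RBM`, the learning rate, and the CD-1 choice are illustrative assumptions, not details taken from the document.

```python
import numpy as np

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1).
    A minimal sketch for illustration, not a production implementation."""

    def __init__(self, n_visible, n_hidden, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Small random weights between visible and hidden units;
        # no intra-layer connections, which is the "restricted" part.
        self.W = self.rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_visible = np.zeros(n_visible)  # visible-unit biases
        self.b_hidden = np.zeros(n_hidden)    # hidden-unit biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        """Forward pass: probability that each hidden unit activates given v."""
        return self._sigmoid(v @ self.W + self.b_hidden)

    def visible_probs(self, h):
        """Backward pass: reconstruct the visible layer from hidden activations h."""
        return self._sigmoid(h @ self.W.T + self.b_visible)

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update: forward, sample, reconstruct, then nudge the weights."""
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)  # reconstruction of the input
        h1 = self.hidden_probs(v1)
        # Update weights toward data-driven statistics and away from
        # reconstruction-driven statistics, averaged over the batch.
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_visible += lr * (v0 - v1).mean(axis=0)
        self.b_hidden += lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)  # reconstruction error for monitoring
```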
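
Stacking RBMs into a Deep Belief Network then amounts to greedy layer-wise training: train one RBM, pass its hidden activations to the next RBM as input, and repeat. The loop below is a hypothetical usage example under the same assumptions, with random placeholder data and illustrative layer sizes.

```python
# Greedy layer-wise pretraining of a stack of RBMs (a Deep Belief Network).
# The data is a random binary placeholder; 784 -> 256 -> 64 is illustrative.
data = (np.random.default_rng(1).random((500, 784)) > 0.5).astype(float)

layer_sizes = [784, 256, 64]
rbms, layer_input = [], data
for n_visible, n_hidden in zip(layer_sizes, layer_sizes[1:]):
    rbm = RBM(n_visible, n_hidden)
    for epoch in range(5):
        err = rbm.cd1_step(layer_input)
    print(f"layer {n_visible}->{n_hidden}: reconstruction error {err:.4f}")
    # The hidden activations of this RBM become the visible input of the next.
    layer_input = rbm.hidden_probs(layer_input)
    rbms.append(rbm)
```

After pretraining, `layer_input` holds the most abstract representation of the data, which is the kind of feature a classifier would be trained on.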