The document discusses backpropagation in neural networks. It begins with an introduction explaining that backpropagation is used to fine-tune the weights of a neural network so as to minimize its error. It then walks through the steps of solving a backpropagation problem on an example network with two inputs, two outputs, and a single hidden layer. The steps include computing the outputs, the errors, and the updated weights, using equations that propagate the error backwards through the network to adjust the weights and reduce the total error.
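A minimal sketch of those steps is shown below for a network of the same shape (two inputs, one hidden layer of two units, two outputs). It assumes sigmoid activations, a squared-error loss, no bias terms, and gradient-descent updates with a fixed learning rate; the specific input, target, weight, and learning-rate values are illustrative placeholders, not taken from the document's worked example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values (assumptions, not from the document's example)
x = np.array([0.05, 0.10])          # two inputs
t = np.array([0.01, 0.99])          # two target outputs
W1 = np.array([[0.15, 0.20],        # input -> hidden weights (2x2)
               [0.25, 0.30]])
W2 = np.array([[0.40, 0.45],        # hidden -> output weights (2x2)
               [0.50, 0.55]])
eta = 0.5                           # learning rate

# Forward pass: compute hidden and output activations
h = sigmoid(W1 @ x)
o = sigmoid(W2 @ h)

# Total error: sum of squared errors over both outputs
E = 0.5 * np.sum((t - o) ** 2)

# Backward pass: error terms for each layer
delta_o = (o - t) * o * (1 - o)             # output-layer error term
delta_h = (W2.T @ delta_o) * h * (1 - h)    # hidden-layer error term (error propagated back)

# Update weights by gradient descent to reduce the total error
W2 -= eta * np.outer(delta_o, h)
W1 -= eta * np.outer(delta_h, x)
```

Repeating the forward pass, backward pass, and weight update in a loop drives the total error E downward over successive iterations, which is the behavior the document's step-by-step example illustrates.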