Backpropagation is a key technique for training artificial neural networks, built around a repeated cycle of forward propagation, backward error propagation, and weight updates. It computes the partial derivatives of a cost function, typically a quadratic cost averaged over the training examples, with respect to every weight and bias in the network. The algorithm is expressed in terms of linear algebra operations, in particular the Hadamard product, the elementwise multiplication of two vectors.
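A minimal NumPy sketch of these ideas, assuming sigmoid activations, a quadratic cost, and a small two-layer network with hypothetical shapes and names, showing where the Hadamard product appears in the backward pass:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop(x, y, W1, b1, W2, b2):
    """Single-example backward pass for a two-layer sigmoid network
    with quadratic cost C = 0.5 * ||a - y||^2 (illustrative sketch)."""
    # Forward pass: store weighted inputs z and activations a per layer.
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)

    # Output-layer error: Hadamard (elementwise) product of the cost
    # gradient with the activation derivative.
    delta2 = (a2 - y) * sigmoid_prime(z2)

    # Propagate the error backward through the weights; again an
    # elementwise product with sigmoid_prime.
    delta1 = (W2.T @ delta2) * sigmoid_prime(z1)

    # Gradients of the cost with respect to each weight matrix and bias.
    grad_W2 = delta2 @ a1.T
    grad_b2 = delta2
    grad_W1 = delta1 @ x.T
    grad_b1 = delta1
    return grad_W1, grad_b1, grad_W2, grad_b2

# Example usage with randomly initialized parameters (shapes chosen for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1)); y = np.array([[1.0], [0.0]])
W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(2, 4)), np.zeros((2, 1))
grads = backprop(x, y, W1, b1, W2, b2)
```

Averaging these per-example gradients over a batch of training examples and subtracting a small multiple of the result from the weights and biases completes one weight-update step of the training cycle described above.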