The document presents the deep residual learning framework proposed in the paper "Deep Residual Learning for Image Recognition" (He et al.). The framework makes extremely deep convolutional neural networks easier to optimize by introducing "skip connections": instead of asking a stack of layers to fit a desired mapping H(x) directly, the layers fit a residual function F(x) = H(x) - x with reference to the block's input, and the block outputs F(x) + x. This reformulation addresses the degradation problem observed in very deep plain networks, where adding layers leads to higher training error. The authors demonstrate that residual networks are easier to optimize and can gain accuracy from increased depth, outperforming plain (non-residual) networks of the same depth.
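The skip connection described above can be sketched as follows. This is a minimal toy illustration in NumPy, not the paper's architecture: the paper uses convolutional layers with batch normalization, whereas here two small dense layers stand in for the residual branch F(x). The names `residual_block`, `w1`, and `w2` are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Toy residual block: the layers compute F(x),
    and the skip connection adds the input back, y = relu(F(x) + x)."""
    out = relu(x @ w1)       # first layer with nonlinearity
    out = out @ w2           # second layer, no activation before the addition
    return relu(out + x)     # skip connection, then final activation

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal(4))  # non-negative input for the identity check

# With zero weights the residual branch F(x) is zero, so the block
# reduces to the identity mapping. This is the intuition behind why
# residual blocks are easy to optimize: doing nothing is trivial to learn.
w_zero = np.zeros((4, 4))
y = residual_block(x, w_zero, w_zero)
assert np.allclose(y, x)
```

The zero-weight check mirrors the paper's argument that if the identity mapping were optimal, pushing the residual F(x) to zero is easier than fitting the identity through a stack of nonlinear layers.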