The document discusses techniques for visualizing and understanding convolutional networks, including deconvolutional networks that project activations back to the input space and occlusion sensitivity analysis. The approach uses a deconvolutional network to map activations in intermediate layers back to input pixel space, showing which input patterns cause a given activation; a sketch of the projection appears below. Training details are also described, including modifying AlexNet by replacing its sparse, GPU-split connections with dense connections. Visualizing features reveals their increasing invariance at higher layers, their exaggeration of discriminative parts of the input, and their evolution over training. The visualizations helped select better-performing architectures and support occlusion sensitivity and correspondence analyses; a sketch of the occlusion experiment follows the deconvnet example.
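
A minimal sketch of the deconvnet projection for a single conv-relu-pool block, assuming PyTorch; the class name `DeconvBlock`, channel counts, and kernel size are illustrative, not taken from the paper's architecture. Following the paper's description, the forward pass records the max-pooling "switch" locations so the projection can unpool, rectify, and filter with transposed copies of the same learned weights:

```python
import torch
import torch.nn.functional as F

class DeconvBlock(torch.nn.Module):
    """One conv-relu-pool block plus its deconvnet projection (illustrative)."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size,
                                    padding=kernel_size // 2)
        self.pool = torch.nn.MaxPool2d(2, stride=2, return_indices=True)

    def forward(self, x):
        # Forward pass: convolve, rectify, pool; keep the pooling "switches"
        # (argmax locations) so the deconvnet can invert max-pooling later.
        feat = F.relu(self.conv(x))
        pooled, switches = self.pool(feat)
        return pooled, switches, feat.shape

    def project(self, pooled, switches, feat_shape):
        # Deconvnet pass, mirroring the paper: (1) unpool using the recorded
        # switches, (2) rectify, (3) filter with transposed versions of the
        # same learned conv weights to reach the layer below.
        unpooled = F.max_unpool2d(pooled, switches, kernel_size=2, stride=2,
                                  output_size=feat_shape[-2:])
        rectified = F.relu(unpooled)
        return F.conv_transpose2d(rectified, self.conv.weight,
                                  padding=self.conv.padding[0])

# Usage: isolate a single feature map (channel 5, chosen arbitrarily) and
# project it back to pixel space to see the pattern that activated it.
block = DeconvBlock(3, 16, kernel_size=3)
img = torch.randn(1, 3, 64, 64)
pooled, switches, shape = block(img)
isolated = torch.zeros_like(pooled)
isolated[:, 5] = pooled[:, 5]
projection = block.project(isolated, switches, shape)  # shape (1, 3, 64, 64)
```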
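
Occlusion sensitivity can likewise be sketched as a sliding-window experiment, again assuming PyTorch; `model`, the patch size, stride, and gray fill value are illustrative assumptions rather than the paper's exact settings. A gray square is swept across the input while the classifier's probability for the target class is recorded, producing a heatmap of the regions the prediction depends on:

```python
import torch

@torch.no_grad()
def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.5):
    """Slide a gray patch over the image and record how the probability of
    the target class changes; low values mark regions the model relies on."""
    model.eval()
    _, _, H, W = image.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heatmap = torch.zeros(rows, cols)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            # Replace a square region with a uniform gray value.
            occluded[:, :, y:y + patch, x:x + patch] = fill
            probs = model(occluded).softmax(dim=-1)
            heatmap[i, j] = probs[0, target_class]
    return heatmap
```

A sharp drop in the heatmap when the occluder covers the object confirms the model is localizing the object itself rather than relying on surrounding context, which is the point of the paper's occlusion experiments.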