An introduction to Squeeze-and-Excitation Networks (SENets), the architecture that won the ImageNet Large Scale Visual Recognition Challenge 2017 (the last ImageNet challenge to be held). The key idea is "channel attention". Attention originally came from NLP but was later adopted for CNNs. This work shows that simply applying attention to the channels of a CNN can dramatically improve an architecture's performance. Moreover, the SE module can be inserted into any existing CNN architecture with minimal computational overhead. It is thus a very simple but very important idea.
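To make the idea concrete, here is a minimal NumPy sketch of an SE block: global-average-pool each channel (squeeze), pass the result through a small two-layer bottleneck ending in a sigmoid (excitation), then rescale each channel by its learned weight. The function and weight names are illustrative, not from the paper's code; a real implementation would use a deep learning framework with learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Sketch of a Squeeze-and-Excitation block (names are illustrative).

    x:  feature map of shape (C, H, W)
    w1: bottleneck weights, shape (C // r, C) for reduction ratio r
    w2: expansion weights, shape (C, C // r)
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid, giving per-channel weights in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel of the original feature map
    return x * s[:, None, None]

# Toy usage: 4 channels, reduction ratio r = 2, fixed weights for illustration
x = np.ones((4, 3, 3))
w1 = np.full((2, 4), 0.1)
w2 = np.full((4, 2), 0.1)
out = se_block(x, w1, w2)
print(out.shape)  # (4, 3, 3) -- same shape as the input, only rescaled
```

Because the squeeze step reduces each channel to a single scalar and the excitation step is just two small fully connected layers, the extra compute is tiny compared to the convolutions it sits between, which is why the block can be dropped into existing architectures so cheaply.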