This document summarizes recent research on deep learning with low-precision numerical representations. It covers six methods for reducing precision: 16-bit fixed-point, dynamic fixed-point, 8-bit approximate representation, BinaryConnect, BinaryNet, and XNOR-Nets. For each method, it provides an abstract, the key techniques used, experimental results on datasets such as MNIST and CIFAR-10, and a discussion of the findings. The common goal of these methods is to reduce computational and memory costs while maintaining high recognition accuracy in neural networks.
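To make the shared idea concrete, the sketch below illustrates two of the listed techniques in their simplest form: rounding weights onto a 16-bit signed fixed-point grid, and BinaryConnect-style deterministic binarization (keeping only each weight's sign). This is a minimal illustration, not code from any of the summarized papers; the function names and the Q3.12 bit-width split are assumptions made for the example, and only NumPy is required.

```python
# Minimal sketch of two precision-reduction ideas (illustrative only;
# not taken from the summarized papers). Assumes NumPy.
import numpy as np

def to_fixed_point(x, total_bits=16, fractional_bits=12):
    """Round x onto a signed fixed-point grid that is total_bits wide
    (including the sign bit), clipping to the representable range.
    The 16/12 split here is an arbitrary example choice."""
    scale = 2.0 ** fractional_bits
    q_min = -(2 ** (total_bits - 1))          # most negative integer code
    q_max = 2 ** (total_bits - 1) - 1         # most positive integer code
    return np.clip(np.round(x * scale), q_min, q_max) / scale

def binarize(w):
    """Deterministic binarization in the spirit of BinaryConnect:
    replace each weight with its sign, +1 or -1."""
    return np.where(w >= 0, 1.0, -1.0)

# Example: apply both reductions to a random weight matrix.
weights = np.random.randn(4, 4).astype(np.float32)
print(to_fixed_point(weights))  # 16-bit fixed-point approximation
print(binarize(weights))        # binary {-1, +1} weights
```

In both cases the quantized values stand in for full-precision weights during the forward pass, which is what lets these methods trade numerical precision for lower memory traffic and cheaper arithmetic.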