1. The document proposes using Number Theoretic Transforms (NTTs) to accelerate quantized convolutional neural networks. NTTs enable fast convolution algorithms by computing the convolution as a circular convolution over a finite field.
2. It presents an implementation of quantized convolutions using Fermat Number Transforms (FNTs), a type of NTT. FNTs admit fast FFT-like algorithms that require only modular additions and bit shifts, since their roots of unity are powers of two.
3. Benchmarks on a Raspberry Pi Zero show that the FNT approach yields computational savings over a naïve convolution implementation for quantized neural networks. Future work includes SIMD optimizations and an FPGA implementation.
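As a minimal sketch of the idea (not the document's own code), the following computes a circular convolution via an NTT modulo the Fermat prime 257 = 2^8 + 1. The transform length, root of unity, and naive O(n^2) transform are illustrative choices; a real FNT implementation would use a fast FFT-like butterfly where multiplication by powers of 2 becomes a shift.

```python
MOD = 257   # Fermat prime 2^8 + 1
N = 8       # transform length (illustrative choice)
OMEGA = 4   # 4 = 2^2 has multiplicative order 8 mod 257, so it is a
            # primitive N-th root of unity; being a power of 2 is what
            # lets an FNT replace twiddle multiplications with shifts

def ntt(a, omega):
    """Naive O(n^2) number theoretic transform mod MOD."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, MOD) for j in range(n)) % MOD
            for i in range(n)]

def circular_conv(x, h):
    """Circular convolution of two length-N sequences via the NTT."""
    X, H = ntt(x, OMEGA), ntt(h, OMEGA)
    Y = [(a * b) % MOD for a, b in zip(X, H)]          # pointwise product
    y = ntt(Y, pow(OMEGA, MOD - 2, MOD))               # inverse transform
    n_inv = pow(N, MOD - 2, MOD)                       # scale by 1/N mod MOD
    return [(v * n_inv) % MOD for v in y]

# Zero-padding makes the circular convolution equal the linear one,
# provided all output values stay below MOD.
x = [1, 2, 3, 0, 0, 0, 0, 0]
h = [4, 5, 0, 0, 0, 0, 0, 0]
print(circular_conv(x, h))  # → [4, 13, 22, 15, 0, 0, 0, 0]
```

The zero-padding caveat is the practical constraint for quantized networks: accumulated sums must fit below the modulus, or the result wraps around.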