This document presents a novel technique for synthesizing fixed-point code for trained neural networks, enabling their deployment in safety-critical applications with limited computational resources. The approach tunes the precision of each network so that accuracy is preserved under fixed-point arithmetic, which executes efficiently on simpler CPUs. Experimental results demonstrate that the synthesized fixed-point networks match the behavior of their floating-point counterparts within user-specified error thresholds.
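To make the core idea concrete, the following is a minimal sketch (not the paper's actual synthesis procedure) of fixed-point inference for a single dense layer: weights, biases, and inputs are quantized to a Q-format with `F` fractional bits, the layer is evaluated with integer multiply-accumulate operations, and the result is checked against the floating-point reference within an error threshold. The choice `F = 12` and the threshold `1e-3` are illustrative assumptions; the paper's technique tunes such parameters per network.

```python
F = 12  # fractional bits (illustrative; precision tuning would select this)

def to_fixed(x, f=F):
    """Quantize a float to a scaled integer with f fractional bits."""
    return round(x * (1 << f))

def from_fixed(q, f=F):
    """Recover a float from its fixed-point representation."""
    return q / (1 << f)

def fixed_dense(x_q, W_q, b_q, f=F):
    """Dense layer using only integer arithmetic.

    The product of two values with f fractional bits has 2f fractional
    bits, so the accumulator is shifted right by f before adding the bias.
    """
    out = []
    for row, b in zip(W_q, b_q):
        acc = sum(xi * wi for xi, wi in zip(x_q, row))
        out.append((acc >> f) + b)
    return out

# Floating-point reference for a tiny 2x2 layer (made-up values).
W = [[0.5, -1.25], [0.75, 0.1]]
b = [0.01, -0.02]
x = [0.3, -0.6]
ref = [sum(xi * wi for xi, wi in zip(x, row)) + bi
       for row, bi in zip(W, b)]

# Quantize, run in fixed point, and dequantize the result.
W_q = [[to_fixed(w) for w in row] for row in W]
b_q = [to_fixed(v) for v in b]
x_q = [to_fixed(v) for v in x]
out = [from_fixed(q) for q in fixed_dense(x_q, W_q, b_q)]

# Verify the fixed-point output stays within a user-specified threshold.
err = max(abs(o - r) for o, r in zip(out, ref))
assert err < 1e-3
```

With 12 fractional bits each quantized value carries at most about 2^-13 of rounding error, so the layer output here stays well inside the 1e-3 threshold; a real synthesis tool would choose the smallest bit-width for which such a bound still holds across the whole network.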