This document presents a hardware implementation of the quasi-Newton method on an FPGA for fast training of artificial neural networks (ANNs), emphasizing its efficiency over traditional CPU and GPU approaches for on-site applications. The proposed architecture supports batch-mode training and achieves a performance improvement of up to 105 times over software implementations. It addresses challenges in hardware training flexibility, scalability, and power efficiency, while demonstrating effective training across a range of ANN sizes.
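For context on what the hardware accelerates, the sketch below illustrates batch-mode quasi-Newton (BFGS) training of a small feedforward network in plain NumPy. It is a software reference sketch only, not the paper's FPGA architecture; the network shape, the toy XOR batch, and helper names such as `train_bfgs` and `loss_and_grad` are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's design): software batch-mode
# quasi-Newton (BFGS) training of a tiny 2-4-1 feedforward network on XOR.
import numpy as np

rng = np.random.default_rng(0)

# Toy full batch: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Network shape: 2 inputs -> 4 hidden (tanh) -> 1 output (sigmoid).
n_in, n_hid, n_out = 2, 4, 1
sizes = [(n_in, n_hid), (1, n_hid), (n_hid, n_out), (1, n_out)]
n_params = sum(r * c for r, c in sizes)


def unpack(theta):
    """Split the flat parameter vector into W1, b1, W2, b2."""
    parts, i = [], 0
    for r, c in sizes:
        parts.append(theta[i:i + r * c].reshape(r, c))
        i += r * c
    return parts


def loss_and_grad(theta):
    """Full-batch squared error and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(theta)
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    O = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # output activations
    E = O - Y
    loss = 0.5 * np.sum(E ** 2)
    dO = E * O * (1.0 - O)
    dH = (dO @ W2.T) * (1.0 - H ** 2)
    grads = [X.T @ dH, dH.sum(0, keepdims=True),
             H.T @ dO, dO.sum(0, keepdims=True)]
    return loss, np.concatenate([g.ravel() for g in grads])


def train_bfgs(theta, iters=200):
    """Batch-mode BFGS: maintain an approximation B of the inverse Hessian."""
    B = np.eye(n_params)
    f, g = loss_and_grad(theta)
    for _ in range(iters):
        d = -B @ g                              # quasi-Newton search direction
        step = 1.0
        while True:                             # simple backtracking line search
            f_new, g_new = loss_and_grad(theta + step * d)
            if f_new <= f + 1e-4 * step * (g @ d) or step < 1e-8:
                break
            step *= 0.5
        s, y = step * d, g_new - g
        sy = s @ y
        if sy > 1e-10:                          # standard BFGS inverse-Hessian update
            rho = 1.0 / sy
            I = np.eye(n_params)
            B = (I - rho * np.outer(s, y)) @ B @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        theta, f, g = theta + step * d, f_new, g_new
    return theta, f


theta0 = rng.normal(scale=0.5, size=n_params)
theta, final_loss = train_bfgs(theta0)
print(f"final batch loss: {final_loss:.6f}")
```

The sketch highlights the operations an accelerator must map to hardware: full-batch forward/backward passes, the curvature (inverse-Hessian) update, and a line search, all of which are matrix-vector workloads amenable to parallel FPGA datapaths.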