In recent years, Deep Neural Networks (DNNs) have rapidly advanced and reached a sufficiently mature state to be adopted in real-world applications. In particular, as DNNs solve difficult problems in computer vision (e.g., image classification and object detection), the community has started exploring the use of DNNs in real-time or nearly real-time vision applications. Video content analysis (VCA) is one such application that often utilizes DNNs as its core engine and offers rich capabilities across a wide range of domains, including safety and security, flame and smoke detection, automotive, healthcare, home automation, and retail. Apache Spark Streaming has been the de facto standard platform for running real-time big-data applications such as VCA at hyperscale. While the integration of DNNs and real-time vision applications promises ample opportunities for the Spark Streaming community, the massive compute demand needed to accommodate (1) the ever-increasing DNN model size and (2) the growing scale of data (e.g., billions of high-resolution video frames) significantly limits its practicality. In this work, we seek to address this challenge and meet this gigantic compute demand by leveraging FPGA acceleration. We develop SparkWeaver, a full-stack solution that automatically offloads the heavy DNN computations of DNN-based real-time vision applications (e.g., VCA) to our FPGA accelerators without developer intervention. We use FPGAs as our DNN acceleration platform because they not only offer the low inference latency and high power efficiency often required by real-time vision applications, but also provide a programmable substrate for accelerating the non-DNN components of these applications.
To demonstrate the ease of use of our solution, we will give a live demo of SparkWeaver's automated workflow, which takes a DNN-based VCA application written using Spark Streaming APIs and runs it on a Spark cluster, offloading the DNN computations to FPGAs without imposing additional manual effort on developers.