This document summarizes an approach to automatic video manipulation detection based on two forensic filters: one derived from discrete cosine transform (DCT) coefficients and the other from video requantization errors. The outputs of these filters are used to train convolutional neural networks (CNNs) that classify videos as original or tampered. The methodology first extracts forensic features from each video using the two filters, then classifies the filter outputs with CNNs. Experimental studies evaluated the approach on three datasets containing original and manipulated videos and compared the resulting classification performance.
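The two filter types can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' implementation: it computes a blockwise 2-D DCT coefficient map for a grayscale frame, and approximates a requantization-error map by quantizing and dequantizing those coefficients with an assumed scalar step `q_step` and subtracting the reconstruction from the original frame. Maps like these would then serve as CNN inputs.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (D @ D.T == identity).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def forensic_filters(frame, q_step=16.0, block=8):
    """Return (dct_map, requant_error_map) for a grayscale frame.

    frame: 2-D float array whose dimensions are divisible by `block`.
    q_step: assumed uniform quantization step (illustrative choice).
    """
    D = dct_matrix(block)
    h, w = frame.shape
    dct_map = np.empty((h, w), dtype=float)
    requant = np.empty((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = frame[i:i + block, j:j + block].astype(float)
            coeffs = D @ b @ D.T                     # blockwise 2-D DCT
            dct_map[i:i + block, j:j + block] = coeffs
            q = np.round(coeffs / q_step) * q_step   # quantize, then dequantize
            requant[i:i + block, j:j + block] = D.T @ q @ D  # inverse DCT
    # Requantization error = original frame minus its requantized version.
    return dct_map, frame - requant

# Example on a synthetic 64x64 grayscale frame.
frame = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
dct_map, err_map = forensic_filters(frame)
```

In a full pipeline, `dct_map` and `err_map` (or stacks of them across frames) would be fed to separate CNNs, each trained to separate original from tampered content.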