This presentation is a case study of how Computational Archival Science can contribute to a novel approach to detecting fake videos.
The associated paper is titled “A Case Study on Leveraging Archival and Engineering Approaches to Develop a Framework to Detect and Prevent ‘Fake Video’”.
The paper was published in the proceedings of the IEEE Big Data 2019 Conference in California. Video of the presentation and the paper can be found on YouTube.
Detecting Fake Videos: How Computational Archival Science can contribute to detecting fake videos
1. 1
Hoda Hamouda (hoda.hamouda@gmail.com), Victoria Lemieux (v.lemieux@ubc.ca)
Corinne Rogers, Ken Thibodeau, Jessica Bushey,
James Stewart, James Cameron, & Chen Feng
4th COMPUTATIONAL ARCHIVAL SCIENCE (CAS) WORKSHOP
Wednesday, Dec. 11, 2019, Los Angeles, CA
Extending the Scope of CAS:
A Case Study on Leveraging Archival and Engineering
Approaches to Develop a Framework to Detect and
Prevent “Fake Video”
2. 2
Extending the Scope of CAS:
A case study of the way in which Computational
Archival Science can be used to contribute to a novel
approach towards detecting fake videos
3. How to better detect and prevent fake videos?
Our team:
Practitioners, academics, and researchers from the disciplines of
archival science, digital forensics, computer science, and
engineering.
3
Hoda Hamouda, Victoria Lemieux,
Corinne Rogers, Ken Thibodeau,
Jessica Bushey, James Stewart, James Cameron, & Chen Feng
4. We propose extending current approaches to detecting fake videos by
incorporating the perspective of archival diplomatics.
Archival diplomatics is the “integration of archival and diplomatic theory about the
genesis, inner constitution, and transmission of documents; and about their
relationship with the facts represented in them, …” (Duranti, 2013)
4
Our research
5. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Human test to validate the typology
• Tests to detect fake videos
5
Our research
7. How to better detect and prevent fake videos?
Process: Analysis of twelve case studies of fake videos
7
8. Previous research has tended to focus on addressing the issue by analyzing the content of videos:
“Spoofing and countermeasures for speaker verification” (Wu et al., 2015)
“Presentation Attack Detection Methods for Face Recognition Systems” (Ramachandra et al., 2017)
“Digital video tampering detection: An overview of passive techniques” (Sitara et al., 2016)
8
Previous Research into detecting fake videos
9. Previous research has tended to focus on addressing the issue by analyzing
the content of videos
(Khodabakhsh et al., 2018; Wu et al., 2015; Ramachandra et al., 2017; Sitara et al., 2016).
In archival science, by contrast, the context of a record plays an important role in
protecting its authenticity.
9
Previous Research into detecting fake videos
10. We see value in applying concepts from archival science,
specifically archival diplomatics and its analytical frameworks,
to enhance existing approaches to detecting fake videos.
10
Previous Research into preventing fake videos
11. Research Plan
Plan:
1) generate a classification of fake videos to be able to name their
different types;
2) generate a model to detect different types of fake videos; and
3) prototype a solution to protect videos from being “faked” or
manipulated.
11
12. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Tests to detect fake videos
• Human test to validate the typology
12
Our research
13. Trustworthiness in archival diplomatics and its
relation to videos
13
Trustworthiness of a Record
• Trustworthiness comprises Accuracy, Reliability, and Authenticity
• Reliability: Completeness and Control over the creation procedure
• Authenticity: Identity and Integrity
18. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Tests to detect fake videos
• Human test to validate the typology
18
Our research
19. Typology of “fakes”: Typology to categorize fake videos
First, in order to detect fake videos, we found that we needed to
build a specification of untrustworthiness in videos and to
generalize a typology of “fakes”.
19
20. Typology of “fakes”
Prior work on developing a taxonomy of different types of fake videos:
Tandoc et al. on the typology of fake news (2018)
Khodabakhsh et al. on audiovisual fake content (i.e., fake videos) (2018)
Teyssou and Spangenberg on fake video content (2019)
20
21. Typology of “fakes”
Shortcomings of previous research on taxonomies of fake videos:
• Focused on videos involving talking heads
• Excluded some genres of videos (e.g., natural disasters and protests)
• Some relied on inferring the intentions of the video’s author
21
22. Typology of “fakes”
Addressing the gaps in previous work, we:
• Focus on videos that have been edited, manipulated, or fabricated,
or from which information has been omitted, such that the video
disseminates disinformation
• Include other genres of videos
• Avoid attempts to guess the author’s intentions
22
23. Our Typology of “fakes”
Every video consists of three components:
Visual
Audio
Metadata (date, location, title, description)
23
24. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Our typology
• Tests to detect fake videos
• Human test to validate the typology
24
Our research
25. Our Typology of “fakes”
Every video consists of three components:
Visual
Audio
Metadata (date, location, title, description)
Fake videos can be identified by detecting inconsistencies in one
or more of a video’s components: visual, audio, or metadata.
25
26. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Tests to detect fake videos
• Human test to validate the typology
26
Our research
27. Our Typology of “fakes”
These inconsistencies can occur
1) among the components of one video, and/or
2) between the components of two videos, if a near-duplicate video exists.
27
28. Our Typology of “fakes”
These inconsistencies can occur
1) among the components of one video, and/or
2) between the components of two videos, if a near-duplicate video exists.
28
29. Our Typology of “fakes”
Fake videos can be identified by detecting inconsistencies in one
or more of a video’s components: visual, audio, or metadata.
These inconsistencies can occur
1) among the components of one video, and/or
2) between the components of two videos, if a near-duplicate video
exists.
29
30. Categories as Tests
We concluded that there are
six unique tests that can be
used to detect a fake video.
30
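The count of six tests follows from the typology: three pairwise comparisons among the components within one video, plus three component-wise comparisons against a near-duplicate video. A minimal sketch in Python (the component names come from the typology above; the code itself is illustrative and not from the paper):

```python
from itertools import combinations

COMPONENTS = ["visual", "audio", "metadata"]

# Internal tests: pairwise comparison of different components
# within the same video -> 3 pairs.
internal_tests = list(combinations(COMPONENTS, 2))

# External tests: comparison of each component against the same
# component of a near-duplicate video -> 3 pairs.
external_tests = [(c, c) for c in COMPONENTS]

all_tests = internal_tests + external_tests
print(len(all_tests))  # 6 unique tests
```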
40. Detecting
To verify a video, we propose to run tests in two rounds, each
consisting of two steps:
Round 1 is an internal consistency check which is a pairwise
comparison of the characteristics of each component
(visual, audio, metadata) within the same video
40
41. Tests to Detect Fake Videos
To verify a video, we propose to run tests in two rounds, each
consisting of two steps:
Round 2 is an external consistency check which is a
pairwise comparison of the characteristics of each
component between one instance of a video and another
instance of a near-duplicate video if one is available.
41
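The two rounds above can be sketched as a simple procedure. This is a hypothetical illustration rather than the paper's implementation: `consistent` is a placeholder for whatever component-level comparison (forensic, perceptual, or metadata-based) a real system would use.

```python
from itertools import combinations

COMPONENTS = ("visual", "audio", "metadata")

def consistent(a, b):
    """Placeholder consistency check between two component descriptions."""
    return a == b  # a real system would use forensic/ML comparisons

def verify(video, near_duplicate=None):
    """Run both rounds of checks; return alerts for a human analyst."""
    alerts = []
    # Round 1: internal consistency -- pairwise comparison of the
    # characteristics of each component within the same video.
    for a, b in combinations(COMPONENTS, 2):
        if not consistent(video[a], video[b]):
            alerts.append(f"internal: {a} vs {b}")
    # Round 2: external consistency -- compare each component with the
    # same component of a near-duplicate video, if one is available.
    if near_duplicate is not None:
        for c in COMPONENTS:
            if not consistent(video[c], near_duplicate[c]):
                alerts.append(f"external: {c}")
    return alerts
```

Each failed check yields an alert rather than a verdict; the human viewer then decides whether further investigation is needed.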
42. The goal is to raise an alert to a human viewer indicating that further
analysis and investigation may be necessary.
42
Tests to Detect Fake Videos
43. Outline
• (Fake) video detection in previous research
• (Fake) videos from the perspective of archival diplomatics
• Typology to categorize fake videos
• Tests to detect fake videos
• Human test to validate the typology
43
Our research
44. Will the tests help people detect fake videos?
To measure: the effect of familiarizing participants with the six
tests on their detection performance.
Experiment Design
44
45. Will the tests help people identify fake videos?
Experiment Design
45
Control group: watch 8 videos (6 fake, 2 original); classify which are fake / authentic
Intervention group: introduced to the types of inconsistencies; watch 8 videos (6 fake, 2 original); classify which are fake / authentic
47. • Producing 14 fake videos
• To eliminate low-level clues that participants
might use to identify the fake videos, I created a
mock YouTube interface
Experiment Design
47
48. • Producing 14 fake videos
• To eliminate low-level clues that participants
might use to identify the fake videos, I created a
mock YouTube interface
• Eliminating order bias
Experiment Design
48
49. Future work
Our future work will focus on conducting a human evaluation of our framework
to determine whether applying the tests leads human classifiers to predict
more accurately whether a video is fake. Based on the results of this
evaluation, we will revise our approach and/or our tests to achieve
improved results. Once we have made these revisions, we will then
design automated techniques to conduct the tests in the context of a
human-in-the-loop system that runs the tests and flags to a human analyst
the possibility that a particular video may be a fake.
49