Slides accompanying an online webinar on DeepFake Detection and a hands-on demonstration of the MeVer DeepFake Detection service. The webinar is supported by the US-Paris Tech Challenge award for our work on the InVID-WeVerify plugin.
Although manipulations of visual and auditory media are as old as the media themselves, the recent emergence of deepfakes marks a turning point in the creation of fake content. Powered by the latest advances in AI and machine learning, deepfakes offer automated procedures for creating fake content that is increasingly hard for human observers to detect. The possibilities for deception are endless, spanning manipulated pictures, videos and audio, with potentially large societal impact. Organizations therefore need to understand the inner workings of the underlying techniques, as well as their strengths and limitations. This article provides a working definition of deepfakes together with an overview of the underlying technology. We classify the different deepfake types: photo (face- and body-swapping), audio (voice-swapping, text-to-speech), video (face-swapping, face-morphing, full-body puppetry) and audio & video (lip-syncing), and identify risks and opportunities to help organizations think about the future of deepfakes. Finally, we propose the R.E.A.L. framework for managing deepfake risks: Record original content to assure deniability, Expose deepfakes early, Advocate for legal protection, and Leverage trust to counter credulity. Following these principles, we hope that society can be better prepared to counter deepfake tricks while appreciating their treats.
Deepfakes - How they work and what it means for the future – Jarrod Overson
Deepfakes originally started as cheap but believable video effects and have expanded into AI-generated content of every format. This session dove into the state of deepfakes and how the technology points to an exciting but dangerous future.
SSII2021 [SS2] Deepfake Generation and Detection – An Overview (ディープフェイクの生成と検出) – SSII
June 10 (Thu), 14:30–15:00
Lecturer: Huy H. Nguyen (The Graduate University for Advanced Studies, SOKENDAI / National Institute of Informatics)
Abstract: Advances in machine learning and their intersection with computer graphics make it easy to generate high-quality images and videos. State-of-the-art manipulation methods enable the real-time manipulation of videos obtained from social networks, and it is even possible to generate videos from a single portrait image. By combining these methods with speech synthesis, attackers can create a realistic video of a person saying something they never said and distribute it on the internet. This results in a loss of social trust, in confusion, and in harm to people's reputations. Several countermeasures have been proposed to tackle this problem, ranging from hand-crafted features to convolutional neural networks. Some countermeasures use single images as input, while others leverage temporal information in videos. Their output can be binary (bona fide or fake), multi-class (deepfake detection), or segmentation masks (manipulation localization). Since deepfake methods evolve rapidly, dealing with unseen ones remains a challenging problem; some solutions have been proposed, but the problem is not completely solved. In this talk, I will provide an overview of both deepfake generation and deepfake detection/localization. I will mainly focus on the image and video domains and also introduce some audiovisual-based methods on both sides. Some open discussions and future directions are also included.
Deepfakes are a form of synthetic media that use artificial intelligence and machine learning algorithms to create fake images, videos, or audio recordings that appear to be real. They are created by manipulating or combining existing content to produce a realistic result.
The “deepfake” phenomenon — using machine learning to generate synthetic video, audio and text content — is an ominous example of how quickly new technologies can be diverted from their original purposes. Month by month, it is becoming easier and cheaper to create fakes that are increasingly difficult to distinguish from genuine artefacts.
deepfake
seminar
computer engineering
A PPT on deepfakes, which use AI and deep learning technology, with advantages, disadvantages, an introduction, references and a conclusion.
DEEPFAKE DETECTION TECHNIQUES: A REVIEW – vivatechijri
Noteworthy advancements in the field of deep learning have led to the rise of highly realistic AI-generated fake videos, commonly known as deepfakes: manipulated videos, generated by sophisticated AI, that yield footage and voices that seem original. Although this technology has numerous beneficial applications, there are also significant concerns about its disadvantages, so there is a need for systems that detect these AI-generated videos and mitigate their negative impact on society. The videos that spread through social media are of low quality, which makes detection difficult. Many researchers have analysed deepfake detection based on machine learning, Support Vector Machines, and deep learning techniques such as Convolutional Neural Networks with or without LSTMs. This paper analyses the various techniques used by several researchers to detect deepfake videos.
Deepfake detection is a critical and evolving field aimed at identifying and mitigating the risks associated with manipulated multimedia content created using artificial intelligence (AI) techniques. Deepfakes involve the use of advanced machine learning algorithms, particularly generative models like Generative Adversarial Networks (GANs), to create highly convincing fake videos, audio recordings, or images that can deceive viewers into believing they are genuine.
One prevalent approach to deepfake detection involves leveraging advancements in computer vision and pattern recognition. Researchers and developers employ sophisticated algorithms to analyze various visual and auditory cues that may indicate the presence of deepfake manipulation. For instance, anomalies in facial expressions, inconsistent lighting and shadows, or unnatural lip sync in videos can be indicative of deepfake content. Additionally, deepfake detectors may examine metadata, such as inconsistencies in timestamps or editing artifacts, to identify alterations in the content's authenticity.
Machine learning plays a central role in deepfake detection, with models being trained on diverse datasets that include both authentic and manipulated content. Supervised learning techniques involve training models on labeled datasets, enabling them to recognize patterns associated with deepfake manipulation. Researchers also explore unsupervised and semi-supervised learning methods, allowing detectors to identify anomalies without explicit labels for every training instance.
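As a toy illustration of this supervised setup, the sketch below trains a logistic-regression classifier on a handful of labeled two-dimensional "cue" vectors. The feature values and labels are invented for illustration; real detectors learn from images or video frames, not two hand-picked scores.

```python
import numpy as np

# Toy labeled dataset: each sample is a vector of two hand-crafted cues
# (say, a blur score and a lip-sync error score -- the numbers are
# invented), labeled 0 = authentic, 1 = deepfake.
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.90, 0.80],
              [0.80, 0.90], [0.15, 0.25], [0.85, 0.75]])
y = np.array([0, 0, 1, 1, 0, 1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by plain gradient descent on the
# cross-entropy loss.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)            # predicted P(fake) per sample
    w -= lr * X.T @ (p - y) / len(y)  # gradient step on the weights
    b -= lr * np.mean(p - y)          # gradient step on the bias

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds.tolist())  # [0, 0, 1, 1, 0, 1] -- matches the labels
```

The same pattern scales up: swap the hand-picked cues for learned CNN features and the logistic layer for a deep classifier head.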
As the field progresses, deepfake detectors are increasingly adopting advanced neural network architectures to enhance their accuracy. Ensembling multiple models, each specialized in detecting specific types of manipulations, is another strategy employed to improve overall detection performance. Furthermore, the integration of explainable AI techniques enables better understanding of the detection process and provides insights into the features contributing to the decision-making process of the models.
Despite these advancements, deepfake detection remains a challenging task due to the constant evolution of deepfake generation techniques. Adversarial training, where detectors are trained on data that includes adversarial examples, is one method to improve robustness against sophisticated manipulation attempts. Continuous research efforts are required to stay ahead of emerging deepfake technologies and to develop detectors capable of identifying novel manipulation methods.
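The adversarial examples mentioned above can be sketched with the fast gradient sign method (FGSM). The weights below are invented for illustration, and for a real detector the input gradient would come from backpropagation through the network rather than a closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy differentiable "detector": P(fake) = sigmoid(w @ x + b).
# The weights are invented for illustration.
w = np.array([3.0, -2.0, 1.5])
b = -0.5

def score(x):
    return sigmoid(x @ w + b)

# FGSM: perturb the input in the direction that most decreases the
# fake score, x' = x - eps * sign(d logit / dx); for this linear
# model that input gradient is simply w.
x = np.array([0.8, 0.1, 0.6])   # a sample the detector flags as fake
eps = 0.3
x_adv = x - eps * np.sign(w)    # small perturbation, lower fake score

print(round(score(x), 3), round(score(x_adv), 3))  # score drops
```

Adversarial training then adds such perturbed samples back into the training set with their correct labels, so the detector learns to resist them.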
In conclusion, deepfake detection is a multidimensional challenge that requires a combination of computer vision, machine learning, and data analysis techniques. Researchers and practitioners are actively developing and refining methods to detect manipulated content by examining visual and auditory cues, leveraging machine learning models, and staying vigilant against evolving deepfake technologies. As the threat landscape evolves, ongoing innovation is essential.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sep-2019-alliance-vitf-ucberkeley
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Shruti Agarwal, Ph.D. Candidate at U.C. Berkeley, delivers the presentation "Creating, Weaponizing, and Detecting Deep Fakes" at the Embedded Vision Alliance's September 2019 Vision Industry and Technology Forum. Agarwal explains how to use computer vision to detect "deepfakes."
5.1 Faster RCNN
Faster RCNN is an efficient tool for detecting objects in 2D color images. The model was first presented in TPAMI in 2016 and improves on the earlier RCNN and Fast RCNN by introducing a deep region proposal network.
A convolutional neural network, or CNN, is a deep learning network designed specifically for processing structured arrays of data such as images. CNNs are widely used in computer vision and have become the state of the art for many visual applications such as image classification.
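The sliding-filter operation at the heart of a CNN can be sketched in a few lines of NumPy. The image and kernel below are toy values chosen to show a vertical-edge response; real CNNs learn their kernel values during training.

```python
import numpy as np

# A CNN's basic operation: slide a small filter over a 2D image and
# take dot products. Here a vertical-edge filter over a toy 4x4 image
# whose left half is dark (0) and right half is bright (9).
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)

kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)  # responds to left-to-right jumps

def conv2d(img, k):
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

fm = conv2d(image, kernel)  # feature map peaks along the vertical edge
print(fm)
```

Stacking many such learned filters, with nonlinearities and pooling in between, is what turns this primitive into an image classifier.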
5.2 TensorFlow and Keras
A. TensorFlow
TensorFlow is an interface for expressing machine learning algorithms. It is used both to deploy ML systems into production across many areas of computer science, including sentiment analysis, voice recognition, computer vision, text summarization and flaw detection, and to pursue research. In the proposed model, the whole Sequential CNN architecture (consisting of several layers) uses TensorFlow as its backend. It is also used to reshape the data (images) during data processing.
B. Keras
Keras provides essential abstractions and building blocks for creating and shipping ML solutions with high iteration velocity. It takes full advantage of the scalability and cross-platform capabilities of TensorFlow. The core data structures of Keras are layers and models [19]. All the layers used in the CNN model are implemented using Keras. Along with converting the class vector to a binary class matrix during data processing, it is used to compile the overall model.
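The "class vector to binary class matrix" conversion mentioned above is one-hot encoding, done in Keras with `keras.utils.to_categorical`. A plain NumPy stand-in shows what that call produces:

```python
import numpy as np

# One-hot encoding: turn a vector of integer class labels into a
# binary class matrix, mirroring keras.utils.to_categorical.
def to_categorical(y, num_classes):
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0   # set a 1 in each row's label column
    return out

labels = np.array([0, 1, 1, 0])       # 0 = real, 1 = fake
onehot = to_categorical(labels, 2)
print(onehot.tolist())  # [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
```

This matrix form is what a softmax output layer with categorical cross-entropy loss expects as its targets.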
5.3 OpenCV
When detecting a face with or without a mask, we are essentially performing image processing: loading an image, converting it into an array, and then performing the required operations. OpenCV, the Open Source Computer Vision Library, is used to perform all of these image-processing steps; it contains more than 2,500 optimized algorithms.
The main use of OpenCV here is to load images and convert them into arrays. OpenCV divides an image into rows and columns according to its resolution; the smallest element of an image is called a pixel. Since a computer understands only numbers, every pixel is represented by three numbers, corresponding to the amounts of red, green and blue.
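The rows-columns-pixels view described above is just a 3D array. As a small self-contained illustration (the pixel values are made up; `cv2.imread` would return the same kind of array, with channels in BGR order):

```python
import numpy as np

# An image is a 3D array: rows x columns x color channels.
# cv2.imread("photo.jpg") returns exactly such an array (BGR order);
# here a tiny 2x2 image is built by hand instead of loading a file.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

rows, cols, channels = img.shape
print(rows, cols, channels)   # 2 2 3
print(img[0, 0].tolist())     # [255, 0, 0] -- one pixel, three numbers
```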
We have used OpenCV to perform real-time face detection on a livestream from our webcam, using the Haar Cascade (Viola–Jones) algorithm to detect faces. It is a machine learning object detection algorithm used to identify objects in an image or video.
It is based on Haar features: common features shared by all human faces, such as the eye region being darker than the upper cheeks, and the nose bridge being darker than the eyes.
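The "darker region" intuition above is exactly what a two-rectangle Haar feature measures. As a rough self-contained sketch (the brightness values are invented for illustration), such a feature can be computed in constant time from an integral image:

```python
import numpy as np

# A two-rectangle Haar feature: sum of pixels in a band (eyes) versus
# the band below it (cheeks). An integral image lets us compute any
# rectangle sum in constant time. Brightness values are made up.
face = np.full((6, 6), 200, dtype=np.int64)  # bright skin
face[1:3, :] = 50                            # darker eye band

# Integral image: ii[r, c] = sum of face[:r, :c].
ii = np.zeros((7, 7), dtype=np.int64)
ii[1:, 1:] = np.cumsum(np.cumsum(face, axis=0), axis=1)

def rect_sum(r0, c0, r1, c1):
    """Sum of face[r0:r1, c0:c1], in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

eyes = rect_sum(1, 0, 3, 6)    # eye band
cheeks = rect_sum(3, 0, 5, 6)  # band just below
feature = cheeks - eyes        # large positive => eye band is darker
print(feature)                 # 1800
```

OpenCV's pre-trained detector (`cv2.CascadeClassifier` loaded with `haarcascade_frontalface_default.xml`) combines thousands of such features in a boosted cascade.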
Face Detection and Recognition System (FDRS) is a physical-characteristics recognition technology that uses the inherent physiological features of humans for identification. The technology does not need to be carried around and cannot be lost, so it is convenient and safe to use.
The Rise of Deep Fake Technology: A Comprehensive Guide – findeverything
In this guide, we delve into the emergence of deep fake technology, an innovative artificial intelligence (AI) technique that uses complex deep learning algorithms to fabricate manipulated videos or images with a realistic appearance. While this cutting-edge technology has the potential to revolutionize the entertainment and marketing industries, it also poses a significant threat to national security, individual privacy and the integrity of information. Our comprehensive analysis explores the difficulties of deep fake technology, its diverse applications, its potential benefits and drawbacks, and its profound impact on various industries.
Deepfakes: An Emerging Internet Threat and their Detection – Symeon Papadopoulos
Webinar talk in the context of the AI4EU Web Cafe. Recording of the talk available on: https://youtu.be/wY1rvseH1C8
Deepfakes have emerged as one of the largest Internet threats; even though their primary use so far has been the creation of pornographic content, the risk of their abuse for disinformation purposes is growing by the day. Deepfake creation approaches and tools are continuously improving in result quality and in ease of use by non-experts, and accordingly the amount of deepfake content on the Internet is growing quickly. For that reason, deepfake detection approaches are a valuable tool for media companies, social media platforms and ultimately citizens, helping them tell authentic content from deepfake-generated content. In this presentation, I give a short overview of developments in the field of deepfake detection and present our lessons learned from working on the problem in the context of the Deepfake Detection Challenge and from developing a service for the H2020 WeVerify project.
Deepfake Detection: The Importance of Training Data Preprocessing and Practic... – Symeon Papadopoulos
Talk on the AI4Media Workshop on GANs for Media Content Generation, October 1st 2020, https://ai4media.eu/events/gan-media-generation-workshop-oct-2020/
Unmasking deepfakes: A systematic review of deepfake detection and generation... – Araz Taeihagh
Due to the fast spread of data through digital media, individuals and societies must assess the reliability of information. Deepfakes are not a novel idea, but they are now a widespread phenomenon. The impact of deepfakes and disinformation can range from infuriating individuals to affecting and misleading entire societies and even nations. There are several ways to detect and generate deepfakes online. By conducting a systematic literature analysis, in this study we explore key automatic detection and generation methods, frameworks, algorithms, and tools for identifying deepfakes (audio, images, and videos), and how these approaches can be employed in different situations to counter the spread of deepfakes and the generation of disinformation. Moreover, we explore state-of-the-art frameworks related to deepfakes to understand how emerging machine learning and deep learning approaches affect online disinformation. We also highlight practical challenges and trends in implementing policies to counter deepfakes. Finally, we provide policy recommendations based on analyzing how emerging artificial intelligence (AI) techniques can be employed to detect and generate deepfakes online. This study benefits the community and readers by providing a better understanding of recent developments in deepfake detection and generation frameworks. The study also sheds light on the potential of AI in relation to deepfakes.
A survey of deepfakes in terms of deep learning and multimedia forensics – IJECEIAES
Artificial intelligence techniques reach us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is deepfakes, which are used to completely modify video (or image) content to display something that was not in it originally. The danger of deepfake technology lies in its impact on society through the loss of confidence in everything that is published. In this paper, we therefore focus on deepfake detection technology from the viewpoint of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, ii) how deep learning and forensic tools have contributed to the detection of deepfakes, and iii) how, in the future, incorporating both deep learning technology and forensic tools can increase the efficiency of deepfake detection.
This PowerPoint presentation contains an overview of various successful works on biometric recognition using deep learning. This work is based on an existing survey paper.
[DSC Europe 22] Face Spoofing Detection: Theory and Practice - Pavle Milosevic – DataScienceConferenc1
I aim to cover both the theoretical and the practical point of view on the face spoofing problem. I am going to speak about the very first methods for tackling this problem, as well as the modern approaches commonly used in practice.
Face Recognition Based on Deep Learning (Yurii Pashchenko Technology Stream) IT Arena
Lviv IT Arena is a conference specially designed for programmers, designers, developers, top managers, inverstors, entrepreneur and startuppers. Annually it takes place on 2-4 of October in Lviv at the Arena Lviv stadium. In 2015 conference gathered more than 1400 participants and over 100 speakers from companies like Facebook. FitBit, Mail.ru, HP, Epson and IBM. More details about conference at itarene.lviv.ua.
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
Deepfakes refer to synthetic media created using advanced AI and ML techniques. What are its potential applications and implications for society at large?
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
DeepFake Detection: Challenges, Progress and Hands-on Demonstration of Technology
1. DeepFake Detection
Challenges, Progress and
Hands-on Demonstration of Technology
Dr. Symeon (Akis) Papadopoulos – @sympap
Dr. Nikos Sarris - @nikossarris
MeVer Team @ Information Technologies Institute (ITI) /
Centre for Research & Technology Hellas (CERTH)
Online Webinar, Dec 16th 2021
Media Verification
(MeVer)
2. DeepFakes
Content generated by deep neural networks that seems authentic to the human eye
Four main types of face DeepFakes:
a) Entire face synthesis, b) Attribute
manipulation, c) Identity swap,
d) Expression swap
Source: DeepFakes and Beyond: A Survey of Face Manipulation and Fake
Detection (Tolosana et al., 2020)
Tolosana, R., et al. (2020). Deepfakes and beyond: A
survey of face manipulation and fake detection.
Information Fusion, 64, 131-148.
Verdoliva, L. (2020). Media forensics and deepfakes:
an overview. IEEE Journal of Selected Topics in
Signal Processing, 14(5), 910-932.
Mirsky, Y., & Lee, W. (2021). The creation and
detection of deepfakes: A survey. ACM Computing
Surveys (CSUR), 54(1), 1-41.
reenactment
replacement
editing
generation
3. Gaining popularity
Nguyen, T. T., et al. (2019). Deep learning for
deepfakes creation and detection. arXiv preprint
arXiv:1909.11573, 1.
Ajder, H., et al. (2019). The State of DeepFakes:
Landscape, Threats and Impact. Report by
DeepTraceLabs/Sensity.
4. Potential Risks and Harms
Tackling deepfakes in European policy, Panel for the Future of Science and Technology,
Scientific Foresight Unit (STOA), July 2021
5. DeepFakes and Politics
One week after the video’s release, Gabon’s military
attempted an ultimately unsuccessful coup—the country’s
first since 1964—citing the video’s oddness as proof
something was amiss with the president.
https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/
Mr Nguyen said he could not rule out the video being a
‘deepfake’, a term for the fairly new artificial intelligence
based technology which involves machine learning
techniques to superimpose a face on a video.
https://www.sbs.com.au/news/a-gay-sex-tape-is-threatening-to-end-the-political-careers-of-two-men-in-malaysia
6. DeepFake Quality Rapidly Improving
https://twitter.com/goodfellow_ian/status/1084973596236144640
2021
Masood, M., Nawaz, M., Malik, K. M., Javed, A., & Irtaza, A. (2021). Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward. arXiv preprint arXiv:2103.00484.
Karras, T., Aittala, M., Laine, S., Härkönen, E., Hellsten, J., Lehtinen, J., & Aila, T. (2021, May). Alias-free generative
adversarial networks. In Thirty-Fifth Conference on Neural Information Processing Systems.
7. A New Level of Realism
• Created by Chris Ume, a VFX specialist
• Not detected by any of the commercial
deepfake detection services
• Not discernible by human inspection
• Potential for misleading, but to date the barriers are still high:
• a lot of expertise, skill and time
• an impersonator who looks like the target
(Miles Fisher)
https://www.theverge.com/2021/3/5/22314980/tom-cruise-deepfake-tiktok-videos-ai-impersonator-chris-ume-miles-fisher
8. Common DF Neural Network Architectures
Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys, 54(1), 1-41.
9. DeepFake Creation Pipeline and Tools
Mirsky, Y., & Lee, W. (2021). The Creation
and Detection of Deepfakes: A Survey. ACM
Computing Surveys, 54(1), 1-41.
faceswap.dev
https://github.com/iperov/DeepFaceLab
zaodownload.com malavida.com/en/soft/fakeapp
hey.reface.ai
facemagic.ai
https://generated.photos/face-generator
10. Signs of a DeepFake (in 2021)
• Different kinds of
artifacts
• Blurry areas around lips, hair, earlobes
• Lack of symmetry
• Lighting inconsistencies
• Fuzzy background
• Flickering (in video)
https://apnews.com/article/bc2f19097a4c4fffaa00de6770b8a60d
11. DF Landscape: Detection Approaches
PHYSIOLOGICAL SIGNALS
• Blinking information
• Corneal specular highlights
• Photoplethysmography

ARTIFACT-BASED DETECTION
• 3D head pose features
• Limited resolution / blurring
• Local artifacts (eyes, teeth, etc.)
• Face X-Ray (blending artifacts)

DEEP LEARNING ARCHITECTURES
• MesoNet
• XceptionNet
• Capsule Networks
• Recurrent Convolutions

FREQUENCY DOMAIN
• Local frequency statistics
• Spectral distribution
• Two-stream approaches
• Attention Nets (Transformers)
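Of the four families above, the frequency-domain cues are the easiest to sketch: many generators leave periodic upsampling artifacts that surface in an image's power spectrum. The minimal, illustrative profile extractor below (not the method of any particular detector cited here) computes the azimuthally averaged log-power spectrum that spectral-distribution approaches typically feed to a classifier:

```python
import numpy as np

def spectral_profile(img):
    """Azimuthally averaged log-power spectrum of a grayscale image.

    Frequency-domain detectors look for anomalies (e.g. upsampling
    checkerboard peaks) in the high-frequency end of this 1D profile.
    """
    f = np.fft.fftshift(np.fft.fft2(img))      # centre the spectrum
    power = np.log1p(np.abs(f) ** 2)           # log power, >= 0
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # radius = frequency band
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)        # mean power per band

rng = np.random.default_rng(0)
profile = spectral_profile(rng.standard_normal((64, 64)))
```

A real detector would compare such profiles (or learned frequency features) between genuine and generated faces rather than threshold them by hand.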
12. The DF Battleground
• DeepFake generation and detection
offer a naturally adversarial setting.
• A recent survey (Feb 2021) analyzed
70 generation and 108 detection
methods and linked them if a
detection method tried to detect
media from a given generator.
• Analysis indicates the fast evolution
of this field.
Juefei-Xu, F., Wang, R., Huang, Y., Guo, Q., Ma, L., & Liu, Y.
(2021). Countering Malicious DeepFakes: Survey,
Battleground, and Horizon. arXiv preprint arXiv:2103.00218.
14. DeepFake Detection Challenge
• Goal: detect videos with facial or voice manipulations
• 2,114 teams participated in the challenge
• Log Loss error evaluation on public and private validation sets
• Public evaluation contained videos with similar transformations as the
training set
• Private evaluation contained organic videos and videos with unknown
transformations from the Internet
Source: https://www.kaggle.com/c/deepfake-detection-challenge
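The Log Loss used for ranking is plain binary cross-entropy over the predicted probability that each video is fake; clipping predictions (the eps below is an illustrative choice) keeps a confidently wrong answer from scoring infinitely badly:

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy: y_true is 1 for fake, 0 for real;
    y_pred is the predicted probability of 'fake'."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)          # avoid log(0)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# Answering 0.5 everywhere scores ln(2) ~ 0.693; being confidently
# wrong costs far more than being confidently right saves.
print(log_loss([1, 0], [0.9, 0.1]))  # ~0.105
```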
15. Performance of SotA methods on DFDC
The DFDC highlights the
generalization challenge
faced by SotA methods.
public set
hidden set
Kim, M., Tariq, S., & Woo, S. S. (2021). FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and Representation Learning. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1001-1012).
Accuracy in cross-forgery experiments (FF++ HQ)
Method           DF      F2F     FS      NT
Xception (DF)    99.41   56.05   49.93   66.32
Xception (F2F)   68.55   98.64   50.55   54.81
Xception (FS)    49.89   54.15   98.36   50.74
Xception (NT)    50.05   57.49   50.01   99.88

Accuracy in cross-dataset experiments
Method               FF++ HQ   CELEB-DF
Xception (FF++ HQ)   95.60     73.01
16. The MeVer DeepFake Detection Service
• R&D started at the end of 2019
• Participation in DeepFake Detection Challenge in Spring 2020
• Ranked among top 5% of solutions
• Alpha version internally released in Summer 2020
• Internally tested and evaluated by WeVerify partners in eight cycles
and continuously refined
• Version 1.0.0 released in November 2021
• Addition of a network trained on more realistic datasets
• Available as a standalone service and via third party
applications: a) Truly Media, b) WeVerify plugin (soon)
20. Overview of Service
Input Images/Videos → Pre-Processing → Deep Learning → Post-Processing → Results / UI
- Shot segmentation
- 64 frames per shot
- Face detection
- Face filtering
- Face clustering per
shot and filtering
P. Charitidis, G. Kordopatis-Zilos, S. Papadopoulos and I. Kompatsiaris. “Investigating
the Impact of Pre-processing and Prediction Aggregation on the DeepFake Detection
Task”. In Proceedings of the Truth and Trust Online, 2020.
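The pre-processing stages listed above are mostly orchestration. The sketch below shows that flow with hypothetical stubs: detect_faces and cluster_faces stand in for the service's real face detector and per-shot clustering components, and shots are given as (start, end) frame ranges:

```python
FRAMES_PER_SHOT = 64  # the service samples 64 frames per shot

def sample_frames(shot, n=FRAMES_PER_SHOT):
    """Evenly sample up to n frame indices from a (start, end) shot."""
    start, end = shot
    if end - start <= n:
        return list(range(start, end))
    step = (end - start) / n
    return [start + int(i * step) for i in range(n)]

def preprocess(shots, detect_faces, cluster_faces):
    """Shot segmentation -> frame sampling -> face detection ->
    per-shot face clustering; returns one list of clusters per shot."""
    per_shot_clusters = []
    for shot in shots:
        faces = []
        for frame_idx in sample_frames(shot):
            faces.extend(detect_faces(frame_idx))  # face crops in this frame
        per_shot_clusters.append(cluster_faces(faces))
    return per_shot_clusters
```

Only the control flow is meant literally; the filtering steps (small or spurious faces, tiny clusters) would slot in between detection and clustering.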
21. Overview of Service
Input Images/Videos → Pre-Processing → Deep Learning → Post-Processing → Results / UI
- Ensemble of models
(EfficientNet +
Transformers)
- Trained on DFDC
(120K videos) and
WildDeepFake (7314
videos) datasets
- BCE / InfoNCE loss
- DF scores per face
22. Overview of Service
Input Images/Videos → Pre-Processing → Deep Learning → Post-Processing → Results / UI
- Average DF scores
per face cluster
- Final prediction is
the maximum face
DF score
- Result preparation
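Taken together, the post-processing rule above fits in a few lines: average the per-face scores inside each cluster (one cluster per depicted person), then report the maximum cluster average, so a single manipulated face is enough to flag the video:

```python
def aggregate(scores_per_cluster):
    """Video-level DeepFake score: mean score per face cluster,
    then the maximum over clusters."""
    means = [sum(s) / len(s) for s in scores_per_cluster if s]
    return max(means) if means else 0.0

# Two face clusters: one looks genuine, one looks swapped.
print(aggregate([[0.1, 0.2, 0.15], [0.8, 0.9]]))  # 0.85
```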
24. Limitations in Detection
Hard to spot very
realistic manipulations
from methods that
involve manual tuning
and post-processing.
The current version cannot
detect fully synthetic faces
(e.g. StyleGAN2,
thispersondoesnotexist.com).
Low-resolution faces may be falsely flagged as DeepFakes.
25. Challenges
• Computational resources (both for training and for serving requests)
• Making the User Interface easy to understand
• Defending against adversarial attacks
• Generalization!
• Keeping up to date with new generation models/methods/tools → continuously enriching the training dataset
26. Current Trends
Generate own DF and Use for
Training
Attention-based and Patch-level Consistency Analysis
Metric and Contrastive
Learning
Domain Adaptation and
Knowledge Distillation
27. Next Steps
• New approaches
• Knowledge Distillation: from simple teacher-student pairs to group teaching setups
• Contrastive Learning: investigate decorrelated representations
• Practical considerations
• Usability
• Efficiency
• Maintenance (new training data, model adaptation, etc.)
• Transparency and Robustness
• Creating a model card for the service (modelcards.withgoogle.com)
• Benchmark service robustness with ART (github.com/Trusted-AI/adversarial-robustness-toolbox)
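For the knowledge-distillation direction above, a one-sample binary sketch shows the basic teacher-student pair; the temperature T and mixing weight alpha are illustrative hyperparameters, not values used by the MeVer service:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(target, p, eps=1e-12):
    """Binary cross-entropy of probability p against a (soft) target."""
    p = min(max(p, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def kd_loss(student_logit, teacher_logit, label, T=2.0, alpha=0.5):
    """Weighted sum of a soft loss (match the teacher's temperature-
    softened probability) and a hard loss (fit the true label)."""
    soft = bce(sigmoid(teacher_logit / T), sigmoid(student_logit / T))
    hard = bce(label, sigmoid(student_logit))
    return alpha * soft + (1 - alpha) * hard
```

Agreement with a confident teacher keeps the loss low; a student that contradicts the teacher pays on both terms. Group-teaching setups extend this by distilling from several teachers at once.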
28. Our DeepFake Detection Team
Akis: MeVer
leader
Nikos: MeVer senior
researcher
Panagiotis: service/API
development
Lazaros: front end
development
Spiros: DeepFake detection
and service development
Pantelis: GAN
detection
George: Deep Learning
research lead
Olga: technical
support
29. Thank you!
Dr. Symeon Papadopoulos
papadop@iti.gr
@sympap
Dr. Nikos Sarris
nsarris@iti.gr
@nikossarris
Media Verification (MeVer)
https://mever.iti.gr/
@meverteam