Webinar talk in the context of the AI4EU Web Cafe. Recording of the talk available on: https://youtu.be/wY1rvseH1C8
Deepfakes have emerged as one of the largest Internet threats. Even though their primary use so far has been the creation of pornographic content, the risk of their abuse for disinformation grows by the day. Deepfake creation approaches and tools are continuously improving in result quality and ease of use by non-experts, and the amount of deepfake content on the Internet is growing accordingly. For that reason, deepfake detection approaches are a valuable tool for media companies, social media platforms and ultimately citizens, helping them tell authentic content from deepfakes. In this presentation, I will give a short overview of developments in the field of deepfake detection and present our lessons learned from working on the problem in the context of the Deepfake Detection Challenge and from developing a service for the H2020 WeVerify project.
Deepfake technology uses artificial intelligence to manipulate or generate visual and audio content where individuals can be inserted into videos and images. This document discusses the origin and development of deepfakes, including early uses on Reddit and later mobile apps. It outlines advantages like training videos but also significant disadvantages such as creating fake identities for politics or pornography without consent. The document provides tips on spotting deepfakes by looking for unnatural facial expressions, movements, or image qualities that seem manipulated.
Deepfakes - How they work and what it means for the future (Jarrod Overson)
Deepfakes started as cheap but believable video effects and have expanded into AI-generated content of every format. This session dove into the state of deepfakes and how the technology highlights an exciting but dangerous future.
This document discusses deepfakes: synthetic media that use artificial intelligence to replace a person's face or body with someone else's. It describes the origin of deepfakes in a Reddit community in 2017 and how applications now allow users to easily create and share manipulated videos. The document explains that deepfakes work using autoencoders and generative adversarial networks to learn from data and generate new, realistic images and videos. It also covers methods for detecting deepfakes and discusses both the potential positive and negative applications of this emerging technology.
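The shared-encoder, two-decoder autoencoder scheme described above can be sketched in a few lines of NumPy. This is only an illustrative toy (linear layers, random stand-in "faces"); real deepfake pipelines use deep convolutional networks plus face alignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale patches for two identities.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

# One shared encoder, one decoder per identity, as plain linear maps.
W_enc = rng.normal(scale=0.1, size=(64, 16))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(16, 64))  # decoder for identity A
W_dec_b = rng.normal(scale=0.1, size=(16, 64))  # decoder for identity B

def train_step(x, W_dec, lr=0.01):
    """One gradient step on reconstruction error for a linear autoencoder."""
    global W_enc
    z = x @ W_enc              # encode
    x_hat = z @ W_dec          # decode
    err = x_hat - x
    # Gradients of mean squared error w.r.t. the two weight matrices.
    grad_dec = z.T @ err / len(x)
    grad_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= lr * grad_dec     # in-place: caller's array is updated
    W_enc -= lr * grad_enc

for _ in range(200):
    train_step(faces_a, W_dec_a)  # both identities share the encoder...
    train_step(faces_b, W_dec_b)  # ...but each trains its own decoder

# The "swap": encode a face of A, decode it with B's decoder.
swapped = (faces_a[:1] @ W_enc) @ W_dec_b
print(swapped.shape)  # (1, 64): an A-pose rendered through B's decoder
```

Because the encoder is shared, it learns pose and expression common to both identities, while each decoder learns to render one identity; decoding A's pose with B's decoder is what produces the swap.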
Although manipulations of visual and auditory media are as old as the media themselves, the recent emergence of deepfakes marks a turning point in the creation of fake content. Powered by the latest advances in AI and machine learning, they offer automated procedures for creating fake content that is harder and harder for human observers to detect. The possibilities to deceive are endless, including manipulated pictures, videos and audio, and the societal impact will be large. Because of this, organizations need to understand the inner workings of the underlying techniques, as well as their strengths and limitations. This article provides a working definition of deepfakes together with an overview of the underlying technology. We classify different deepfake types: photo (face- and body-swapping), audio (voice-swapping, text-to-speech), video (face-swapping, face-morphing, full-body puppetry) and audio & video (lip-synching), and identify risks and opportunities to help organizations think about the future of deepfakes. Finally, we propose the R.E.A.L. framework to manage deepfake risks: Record original content to assure deniability, Expose deepfakes early, Advocate for legal protection and Leverage trust to counter credulity. Following these principles, we hope that our society can be better prepared to counter deepfake tricks as we appreciate their treats.
This document discusses deepfakes, including what they are, their history, present uses, future challenges, and consequences. Deepfakes use deep learning techniques like GANs to manipulate images and audio to deceive viewers into thinking something is real when it is actually fake. While initially developed by researchers, open-source tools now allow anyone to generate deepfakes. The future poses challenges around reducing training data needs, improving temporal coherence in videos, and preventing identity leakage, among other issues. Deepfakes could potentially target politicians, actors and public figures to manipulate perceptions. Prevention strategies include developing counter-AI techniques, using blockchain, and raising awareness.
Deepfakes use deep learning techniques to manipulate faces in images and videos, commonly swapping one person's face for another's. This technique has become widespread due to large public databases, advances in deep learning that automate editing, and apps that allow amateurs to create fakes. While detection methods have improved, fully foolproof detection remains elusive as fakes evolve. The document outlines four main facial manipulation techniques - entire face synthesis, identity swap, attribute manipulation, and expression swap - and discusses challenges in detecting fakes under each. It concludes that more research is still needed, particularly to detect fakes that have been modified to evade existing detection methods.
DeepFake Detection: Challenges, Progress and Hands-on Demonstration of Techno... (Symeon Papadopoulos)
Slides accompanying an online webinar on DeepFake Detection and a hands-on demonstration of the MeVer DeepFake Detection service. The webinar is supported by the US-Paris Tech Challenge award for our work on the InVID-WeVerify plugin.
IRJET - Deepfake Video Detection using Image Processing and Hashing Tools (IRJET Journal)
This document discusses a method for detecting deepfake videos using image processing and hashing tools. Deepfake videos are digital videos that have been manipulated, often using machine learning, to deceive viewers. The proposed method uses Django tools and the MD5 hashing algorithm to analyze sample deepfake videos and detect manipulations. It aims to provide an easy and affordable way to identify deepfakes. The document provides background on how deepfakes are generated using techniques like autoencoders and generative adversarial networks. It also discusses potential applications and issues related to deepfakes, such as their use in pornography, politics, and compromising forensic evidence.
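The document's exact pipeline is not reproduced here, but the core idea of hash-based integrity checking can be sketched as follows. Note that MD5 only reveals that bytes changed, not what changed, and it flags any re-encoding as well; it is also no longer collision-resistant, so this is an integrity sketch, not a security mechanism:

```python
import hashlib

def md5_of_bytes(data: bytes) -> str:
    """Return the MD5 hex digest of a byte string (e.g. a video file's contents)."""
    return hashlib.md5(data).hexdigest()

original = b"\x00\x01frame-data..."   # stand-in for the original video bytes
tampered = b"\x00\x01frame-dAta..."   # a single byte changed

assert md5_of_bytes(original) == md5_of_bytes(original)  # identical bytes match
assert md5_of_bytes(original) != md5_of_bytes(tampered)  # any edit changes the hash
```

Comparing a file's digest against a trusted reference digest therefore detects that a video was altered, provided an authentic original hash is available.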
Deepfake detection models require clean training data to generalize well. The document discusses preprocessing training data by filtering out false detections from face extraction. This improved log loss error on evaluation datasets for models trained with the preprocessed data. However, deepfake detection remains challenging due to limited generalization, overfitting, and the broad scope of possible manipulations. The importance of preprocessing training data and methods to address challenges are discussed.
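A minimal sketch of the kind of filtering described, assuming the face extractor reports a confidence score and a bounding-box size (the thresholds below are illustrative, not taken from the document):

```python
# Each detection from the face extractor: (confidence, width, height).
detections = [
    (0.99, 120, 130),  # clear face
    (0.40,  60,  64),  # likely a false positive
    (0.95,  18,  20),  # confident but too small to be useful for training
]

MIN_CONF, MIN_SIDE = 0.9, 32  # illustrative thresholds

# Keep only confident, reasonably sized crops for the training set.
clean = [d for d in detections
         if d[0] >= MIN_CONF and d[1] >= MIN_SIDE and d[2] >= MIN_SIDE]

print(len(clean))  # 1: only the first detection survives
```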
The "Big Data Analytics and its Use by Apple" presentation provides an overview of how Apple harnesses big data analytics to gain insights, drive innovation, and enhance business performance. It explores Apple's strategic use of data analytics in areas such as product development, customer experience, and operational efficiency, showcasing the value of data-driven decision-making in one of the world's leading technology companies.
DEEPFAKE DETECTION TECHNIQUES: A REVIEW (vivatechijri)
Noteworthy advancements in the field of deep learning have led to the rise of highly realistic AI-generated fake videos, commonly known as deepfakes: manipulated videos, produced by sophisticated AI, whose footage and voices appear genuine. Although this technology has numerous beneficial applications, there are also significant concerns about its disadvantages, so there is a need for systems that detect such AI-generated videos and mitigate their negative impact on society. Videos shared through social media are often of low quality, which makes their detection difficult. Many researchers have analyzed deepfake detection using machine learning, Support Vector Machines, and deep learning techniques such as Convolutional Neural Networks with or without LSTMs. This paper analyses the various techniques used by several researchers to detect deepfake videos.
The Rise of Deep Fake Technology: A Comprehensive Guide (findeverything)
In this guide, we delve into the emergence of deepfake technology, an artificial intelligence (AI) technique that uses complex deep learning algorithms to fabricate manipulated videos or images with a realistic appearance. While this cutting-edge technology has the potential to revolutionize the entertainment and marketing industries, it also poses a significant threat to national security, individual privacy, and the integrity of information. Our comprehensive analysis explores the workings of deepfake technology, its diverse applications, its potential benefits and drawbacks, and its profound impact on various industries.
Deepfakes are a form of synthetic media that use artificial intelligence and machine learning algorithms to create fake images, videos, or audio recordings that appear to be real. They are created by manipulating or combining existing content to produce a realistic result.
The “deepfake” phenomenon — using machine learning to generate synthetic video, audio and text content — is an ominous example of how quickly new technologies can be diverted from their original purposes. Month by month, it is becoming easier and cheaper to create fakes that are increasingly difficult to distinguish from genuine artefacts.
SSII2021 [SS2] Deepfake Generation and Detection – An Overview (ディープフェイクの生成と検出) (SSII)
This document provides an overview of deepfake generation and detection. It begins with an introduction to the author and their background and research interests. The rest of the document is outlined as follows: definitions of deepfakes, various deepfake generation techniques including face synthesis, manipulation, reenactment and swapping, and an overview of deepfake detection methods including commonly used datasets, image-based and video-based detection approaches.
The document discusses generative models and their applications in artificial intelligence. Generative adversarial networks (GANs) use two neural networks, a generator and discriminator, that compete against each other. The generator learns to generate new data that looks real by fooling the discriminator, while the discriminator learns to better identify real from fake data. GANs have been used for tasks like image generation and neural style transfer. They show potential to generate art, music and other creative forms through machine learning.
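The generator/discriminator competition can be illustrated with a deliberately tiny GAN on one-dimensional data, written in plain NumPy with hand-derived gradients. This is a sketch of the training dynamics only, not a practical implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(3, 0.5). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator G(z) = a*z + b ; Discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

for _ in range(1000):
    n = 64
    z = rng.normal(size=n)
    fake = a * z + b
    real = real_batch(n)

    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) -> 1 (non-saturating GAN loss) ---
    d_fake = sigmoid(w * fake + c)
    dx = -(1 - d_fake) * w          # dLoss/dfake
    a -= lr * (dx * z).mean()
    b -= lr * dx.mean()

samples = a * rng.normal(size=1000) + b
print(round(samples.mean(), 2))  # drifts toward the real mean of 3
```

Even in this toy, the discriminator's gradient tells the generator which way "more realistic" lies; practical GANs add many stabilization tricks (deep networks, normalization, careful learning rates) that this sketch omits.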
This document describes a deep learning algorithm to classify videos as deepfakes or authentic. It discusses deepfakes, how they are created, and the system architecture, including data preprocessing, a ResNeXt-50 model with an LSTM, and the training workflow. Results show models trained on different datasets and frame-sequence lengths achieving accuracies from 84% to 98%. The project uses PyTorch and Django, with Google Cloud Platform for computing.
1) Deep learning is a type of machine learning that uses neural networks with many layers to learn representations of data with multiple levels of abstraction.
2) Deep learning techniques include unsupervised pretrained networks, convolutional neural networks, recurrent neural networks, and recursive neural networks.
3) The advantages of deep learning include automatic feature extraction from raw data with minimal human effort, and surpassing conventional machine learning algorithms in accuracy across many data types.
Deepfakes are synthetic media that use artificial intelligence to realistically manipulate images and videos by replacing one person's face with another's. The term is a combination of "deep learning" and "fake". While deepfake technology was initially developed for entertainment purposes like special effects, it can also be used to impersonate people, create realistic simulations for training, and generate fake content for social media. However, there are also disadvantages, such as using deepfakes for blackmail and spreading misinformation, and the resulting loss of authenticity is why regulation of this technology is important.
Deepfakes are artificially generated videos or images that manipulate real people into appearing to say or do something they did not. They can be created quickly using freely available software on a standard gaming PC by training generative adversarial networks on source images or video of a person's face and the target face to map and swap. Detection of faces and landmarks allows the AI to render convincing fakes without needing artistic talent. Future sessions will discuss the ethics of deepfakes and how to detect manipulated media.
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck is tailored to an audience from the financial industry, its content remains broadly applicable.
(Note: Discover a slightly updated version of this deck at slideshare.net/LoicMerckel/introduction-to-llms.)
Digital image forgery involves altering images through techniques like retouching, splicing, and cloning. Retouching enhances or reduces image features. Splicing combines fragments from multiple images to form new images. Cloning copies and pastes parts of a single image to duplicate or conceal objects. Forgery detection analyzes images passively for traces left during processing, or actively uses hidden digital watermarks or signatures embedded during acquisition to verify an image's source and detect modifications.
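Passive copy-move (cloning) detection can be illustrated with a crude block-hashing sketch: identical pixel blocks appearing at two distant locations are a cloning cue. Real detectors use robust features (e.g. DCT coefficients) so that they survive recompression; this toy only finds exact duplicates:

```python
from collections import defaultdict

def find_cloned_blocks(img, block=4):
    """Return groups of top-left coordinates whose block x block pixel
    regions are identical -- a crude copy-move (cloning) cue.
    `img` is a 2D list of ints (grayscale)."""
    seen = defaultdict(list)
    h, w = len(img), len(img[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(img[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            seen[key].append((y, x))
    # Keep keys seen at more than one location, far enough apart
    # that the matches are not just overlapping windows.
    return [locs for locs in seen.values()
            if len(locs) > 1
            and abs(locs[0][0] - locs[1][0]) + abs(locs[0][1] - locs[1][1]) >= block]

# Build a 12x12 image with all-distinct pixels, then clone a 4x4 patch.
img = [[y * 12 + x for x in range(12)] for y in range(12)]
for dy in range(4):
    for dx in range(4):
        img[8 + dy][8 + dx] = img[dy][dx]   # copy the top-left patch

print(len(find_cloned_blocks(img)) > 0)  # True: cloning detected
```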
(2017/06) Practical points of deep learning for medical imaging (Kyuhwan Jung)
This document provides an overview of deep learning and its applications in medical imaging. It discusses key topics such as the definition of artificial intelligence, a brief history of neural networks and machine learning, and how deep learning is driving breakthroughs in tasks like visual and speech recognition. The document also addresses challenges in medical data analysis using deep learning, such as how to handle limited data or annotations. It provides examples of techniques used to address these challenges, such as data augmentation, transfer learning, and weakly supervised learning.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
Generative AI and Security (Priyanka Aash)
Generative AI and Security Testing discusses generative AI, including its definition as a subset of AI focused on generating content similar to human creations. The document outlines the evolution of generative AI from artificial neural networks to modern models like GPT, GANs, and VAEs. It provides examples of different types of generative AI like text, image, audio, and video generation. The document proposes potential uses of generative AI like GPT for security testing tasks such as malware generation, adversarial attack simulation, and penetration testing assistance.
Unmasking deepfakes: A systematic review of deepfake detection and generation... (Araz Taeihagh)
Due to the fast spread of data through digital media, individuals and societies must assess the reliability of information. Deepfakes are not a novel idea, but they are now a widespread phenomenon. The impact of deepfakes and disinformation ranges from infuriating individuals to misleading entire societies and even nations. There are several ways to detect and generate deepfakes online. Through a systematic literature analysis, this study explores key automatic detection and generation methods, frameworks, algorithms, and tools for identifying deepfakes (audio, images, and videos), and how these approaches can be employed in different situations to counter the spread of deepfakes and the generation of disinformation. Moreover, we explore state-of-the-art frameworks related to deepfakes to understand how emerging machine learning and deep learning approaches affect online disinformation. We also highlight practical challenges and trends in implementing policies to counter deepfakes. Finally, we provide policy recommendations based on an analysis of how emerging artificial intelligence (AI) techniques can be employed to detect and generate deepfakes online. This study benefits the community and readers by providing a better understanding of recent developments in deepfake detection and generation frameworks, and it sheds light on the potential of AI in relation to deepfakes.
Deepfake AI has emerged as an enthralling and troubling topic in this age of rapid technological advancement. Deepfake AI, a blend of "deep learning" and "fake", is a powerful tool that uses artificial intelligence to manipulate and generate incredibly realistic video, audio, and textual content. This technology has far-reaching societal implications, from entertainment to politics and beyond. The purpose of this article is to provide a comprehensive and simplified understanding of deepfake AI, its implications, and potential safeguards.
1: What Is Deepfake AI?
1.1 Definition and Origins of Deepfake AI
Deepfake AI combines "deep learning" and "fake", referring to AI's ability to create highly convincing fake content. It relies on deep neural networks: complex mathematical models that learn from large datasets to mimic human-like behaviors.
1.2 How Does Deepfake AI Work?
Deepfake AI works in two stages:
Data Collection: It collects massive amounts of data on the target person, including images, videos, and audio recordings.
Model Training: The AI uses this data to train itself to produce realistic content by mimicking the person's mannerisms, expressions, and voice.
1.3 The Science Behind Deepfake AI
AI models, particularly deep neural networks, are used to create deepfakes. These networks learn the nuances of a person's speech patterns, facial expressions, and mannerisms by analyzing massive datasets of images and audio recordings. This knowledge serves as the foundation for creating realistic imitations.
2: Implications of Deepfake AI
2.1 Misinformation and Disinformation
Deepfake AI can disseminate false information and manipulate public perception. Malicious actors can use deepfakes to impersonate individuals and create fake news, jeopardizing trust in media and information sources.
2.2 Privacy Concerns
Deepfakes raise serious privacy concerns because personal data can be used to create fabricated content. Individuals' privacy may be jeopardized when their faces and voices are used without their permission.
2.3 Political Manipulation
Deepfake AI can be used to target political figures. Tampered videos and audio recordings can be used to fabricate evidence, sway elections, and tarnish reputations.
2.4 Identity Theft
Deepfakes can be used to steal people's identities, causing significant harm. Criminals may use realistic deepfake content to create fake profiles, steal identities, or commit fraud.
3: Detecting Deepfake AI
3.1 Facial and Vocal Anomalies
Deepfake detection frequently relies on examining facial and vocal cues. Unusual movements, abnormal blinking patterns, and inconsistent lip-syncing are red flags.
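The blinking cue can be sketched as follows, assuming an upstream landmark detector has already produced a per-frame eye-aspect-ratio (EAR) series (not shown here); the thresholds are illustrative:

```python
def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks in a per-frame eye-aspect-ratio series: a blink is a
    transition from open (EAR above threshold) to closed and back."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            closed = True
        elif ear >= closed_thresh and closed:
            blinks += 1
            closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, min_blinks_per_min=4):
    """Flag clips whose blink rate falls below a plausible human minimum."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) < min_blinks_per_min * minutes

# 60 s of "video": a natural clip blinks ~12 times, a fake almost never.
natural = ([0.3] * 140 + [0.1] * 5 + [0.3] * 5) * 12
fake = [0.3] * 1800
print(blink_rate_suspicious(natural), blink_rate_suspicious(fake))  # False True
```

Early deepfake generators rarely reproduced natural blinking because training photos mostly show open eyes; newer generators have largely closed this gap, so such cues work best in combination.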
3.2 Metadata Analysis
Deepfake AI can sometimes leave digital traces in media metadata. Analyzing metadata for inconsistencies can aid in the detection of manipulated content.
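As a crude illustration of metadata analysis, one can scan a file's raw bytes for the names that editing tools commonly write into metadata fields. Real analysis parses EXIF/XMP structures properly (e.g. with exiftool); the signature list and the sample blob below are assumptions made up for the example:

```python
EDITOR_SIGNATURES = [b"Adobe Photoshop", b"GIMP", b"Lavf"]  # Lavf = FFmpeg muxer

def editing_traces(file_bytes: bytes):
    """Return editor/muxer names found in the raw bytes of a media file.
    Crude but illustrative: real tools parse EXIF/XMP fields properly."""
    return [sig.decode() for sig in EDITOR_SIGNATURES if sig in file_bytes]

# Fabricated "file" whose XMP block names the tool that last saved it.
blob = b"\xff\xd8\xff\xe1<xmp>CreatorTool=Adobe Photoshop 24.0</xmp>..."
print(editing_traces(blob))  # ['Adobe Photoshop']
```

An absence of traces proves nothing (metadata is trivially stripped), but their presence, or inconsistencies between fields, can justify closer inspection.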
3.3 AI Algorithms Development for Deepfake AI
IRJET - Deepfake Video Detection using Image Processing and Hashing ToolsIRJET Journal
This document discusses a method for detecting deepfake videos using image processing and hashing tools. Deepfake videos are digital videos that have been manipulated, often using machine learning, to deceive viewers. The proposed method uses Django tools and the MD5 hashing algorithm to analyze sample deepfake videos and detect manipulations. It aims to provide an easy and affordable way to identify deepfakes. The document provides background on how deepfakes are generated using techniques like autoencoders and generative adversarial networks. It also discusses potential applications and issues related to deepfakes, such as their use in pornography, politics, and compromising forensic evidence.
Deepfake detection models require clean training data to generalize well. The document discusses preprocessing training data by filtering out false detections from face extraction. This improved log loss error on evaluation datasets for models trained with the preprocessed data. However, deepfake detection remains challenging due to limited generalization, overfitting, and the broad scope of possible manipulations. The importance of preprocessing training data and methods to address challenges are discussed.
The "Big Data Analytics and its Use by Apple" presentation provides an overview of how Apple harnesses big data analytics to gain insights, drive innovation, and enhance business performance. It explores Apple's strategic use of data analytics in areas such as product development, customer experience, and operational efficiency, showcasing the value of data-driven decision-making in one of the world's leading technology companies.
DEEPFAKE DETECTION TECHNIQUES: A REVIEWvivatechijri
Noteworthy advancements in the field of deep learning have led to the rise of highly realistic AI generated fake videos, these videos are commonly known as Deepfakes. They refer to manipulated videos, that are generated by sophisticated AI, that yield formed videos and tones that seem to be original. Although this technology has numerous beneficial applications, there are also significant concerns about the disadvantages of the same. So there is a need to develop a system that would detect and mitigate the negative impact of these AI generated videos on society. The videos that get transferred through social media are of low quality, so the detection of such videos becomes difficult. Many researchers in the past have done analysis on Deepfake detection which were based on Machine Learning, Support Vector Machine and Deep Learning based techniques such as Convolution Neural Network with or without LSTM .This paper analyses various techniques that are used by several researchers to detect Deepfake videos.
The Rise of Deep Fake Technology: A Comprehensive Guidefindeverything
In this guide, we go through into the emergence of deep fake technology, an innovative artificial intelligence (AI) technique that utilizes complex deep learning algorithms to fabricate manipulated videos or images with a realistic appearance. While this cutting-edge technology has the potential to revolution the entertainment and marketing industries, it also poses a significant threat to national security, individual privacy, and the truth of information. Our comprehensive analysis explores the difficulties of deep fake technology, its diverse applications, the potential benefits and drawbacks, and its profound impact on various industries.
Deepfakes are a form of synthetic media that use artificial intelligence and machine learning algorithms to create fake images, videos, or audio recordings that appear to be real. They are created by manipulating or combining existing content to produce a realistic result.
The “deepfake” phenomenon — using machine learning to generate synthetic video, audio and text content — is an ominous example of how quickly new technologies can be diverted from their original purposes. Month by month, it is becoming easier and cheaper to create fakes that are increasingly difficult to distinguish from genuine artefacts.
SSII2021 [SS2] Deepfake Generation and Detection – An Overview (ディープフェイクの生成と検出)SSII
This document provides an overview of deepfake generation and detection. It begins with an introduction to the author and their background and research interests. The rest of the document is outlined as follows: definitions of deepfakes, various deepfake generation techniques including face synthesis, manipulation, reenactment and swapping, and an overview of deepfake detection methods including commonly used datasets, image-based and video-based detection approaches.
The document discusses generative models and their applications in artificial intelligence. Generative adversarial networks (GANs) use two neural networks, a generator and discriminator, that compete against each other. The generator learns to generate new data that looks real by fooling the discriminator, while the discriminator learns to better identify real from fake data. GANs have been used for tasks like image generation and neural style transfer. They show potential to generate art, music and other creative forms through machine learning.
This document describes a deep learning algorithm to classify videos as deepfakes or authentic. It discusses deepfakes, how they are created, the system architecture including data preprocessing, a ResNext-50 model architecture with LSTM and training workflow. Results show models trained on different datasets and frame sequences achieving accuracies from 84% to 98%. The project uses PyTorch and Django with Google Cloud Platform for computing.
1) Deep learning is a type of machine learning that uses neural networks with many layers to learn representations of data with multiple levels of abstraction.
2) Deep learning techniques include unsupervised pretrained networks, convolutional neural networks, recurrent neural networks, and recursive neural networks.
3) The advantages of deep learning include automatic feature extraction from raw data with minimal human effort, and surpassing conventional machine learning algorithms in accuracy across many data types.
Deepfakes are synthetic media that uses artificial intelligence to realistically manipulate images and videos by replacing a person's face with another. The term is a combination of "deep learning" and "fake". While deepfake technology was initially developed for entertainment purposes like special effects, it can also be used to impersonate people, create realistic simulations for training, and generate fake content for social media. However, there are disadvantages like using deepfakes for blackmail, spreading misinformation, and lack of authenticity which is why regulation of this technology is important.
Deepfakes are artificially generated videos or images that manipulate real people into appearing to say or do something they did not. They can be created quickly using freely available software on a standard gaming PC by training generative adversarial networks on source images or video of a person's face and the target face to map and swap. Detection of faces and landmarks allows the AI to render convincing fakes without needing artistic talent. Future sessions will discuss the ethics of deepfakes and how to detect manipulated media.
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck is tailored for an audience from the financial industry, its content remains broadly applicable.
(Note: Discover a slightly updated version of this deck at slideshare.net/LoicMerckel/introduction-to-llms.)
Digital image forgery involves altering images through techniques like retouching, splicing, and cloning. Retouching enhances or reduces image features. Splicing combines fragments from multiple images to form new images. Cloning copies and pastes parts of a single image to duplicate or conceal objects. Forgery detection analyzes images passively for traces left during processing, or actively uses hidden digital watermarks or signatures embedded during acquisition to verify an image's source and detect modifications.
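Cloning (copy-move forgery) can be illustrated with a naive detector that looks for identical pixel blocks at different positions in an image; real detectors match robust features (DCT coefficients, keypoints) rather than exact values, so this is only a sketch of the idea:

```python
def find_cloned_blocks(img, block=2):
    """Naive copy-move (cloning) detector: slide a block x block
    window over a 2D intensity grid and report pairs of distinct
    positions whose pixel blocks are identical. Illustrative only;
    real forgery detectors use robust features, not exact matches."""
    h, w = len(img), len(img[0])
    seen = {}       # patch contents -> first position seen
    matches = []
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            patch = tuple(tuple(img[r + i][c + j] for j in range(block))
                          for i in range(block))
            if patch in seen:
                matches.append((seen[patch], (r, c)))
            else:
                seen[patch] = (r, c)
    return matches
```

On an image where the 2x2 region at (0, 0) was pasted at (0, 3), the detector reports that pair; overlapping flat regions also match, which is why real tools filter by distance and texture.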
(2017/06) Practical points of deep learning for medical imaging – Kyuhwan Jung
This document provides an overview of deep learning and its applications in medical imaging. It discusses key topics such as the definition of artificial intelligence, a brief history of neural networks and machine learning, and how deep learning is driving breakthroughs in tasks like visual and speech recognition. The document also addresses challenges in medical data analysis using deep learning, such as how to handle limited data or annotations. It provides examples of techniques used to address these challenges, such as data augmentation, transfer learning, and weakly supervised learning.
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
Generative AI and Security (1).pptx.pdf – Priyanka Aash
Generative AI and Security Testing discusses generative AI, including its definition as a subset of AI focused on generating content similar to human creations. The document outlines the evolution of generative AI from artificial neural networks to modern models like GPT, GANs, and VAEs. It provides examples of different types of generative AI like text, image, audio, and video generation. The document proposes potential uses of generative AI like GPT for security testing tasks such as malware generation, adversarial attack simulation, and penetration testing assistance.
Unmasking deepfakes: A systematic review of deepfake detection and generation... – Araz Taeihagh
Due to the fast spread of data through digital media, individuals and societies must assess the reliability of information. Deepfakes are not a novel idea, but they are now a widespread phenomenon. The impact of deepfakes and disinformation can range from infuriating individuals to affecting and misleading entire societies and even nations. There are several ways to detect and generate deepfakes online. By conducting a systematic literature analysis, in this study we explore key automatic detection and generation methods, frameworks, algorithms, and tools for identifying deepfakes (audio, images, and videos), and how these approaches can be employed within different situations to counter the spread of deepfakes and the generation of disinformation. Moreover, we explore state-of-the-art frameworks related to deepfakes to understand how emerging machine learning and deep learning approaches affect online disinformation. We also highlight practical challenges and trends in implementing policies to counter deepfakes. Finally, we provide policy recommendations based on analyzing how emerging artificial intelligence (AI) techniques can be employed to detect and generate deepfakes online. This study benefits the community and readers by providing a better understanding of recent developments in deepfake detection and generation frameworks. The study also sheds light on the potential of AI in relation to deepfakes.
Deepfake AI has emerged as an enthralling and troubling topic in this age of rapid technological advancement. Deepfake AI, a portmanteau of "deep learning" and "fake," is a powerful tool that manipulates and generates incredibly realistic video, audio, and textual content using artificial intelligence. This technology has far-reaching societal implications, from entertainment to politics and beyond. The purpose of this article is to provide a comprehensive and simplified understanding of deepfake AI, its implications, and potential safeguards.
1: What Is Deepfake AI?
1.1 Definition and Origins of Deepfake AI
Deepfake AI combines "deep learning" and "fake," referring to AI's ability to create highly convincing fake content. It relies on deep neural networks: complex mathematical models that learn from large datasets to mimic human-like behavior.
1.2 How Does Deepfake AI Work?
Deepfake AI works in two stages:
Data Collection: It collects massive amounts of data on the target person, including images, videos, and audio recordings.
Model Training: The AI uses this data to train itself to produce realistic content by mimicking the person's mannerisms, expressions, and voice.
1.3 The Science Behind Deepfake AI
AI models, particularly deep neural networks, are used to create deepfakes. These networks learn the nuances of a person's speech patterns, facial expressions, and mannerisms by analyzing massive datasets of images and audio recordings. This knowledge serves as the foundation for creating realistic imitations.
2: Implications of Deepfake AI
2.1 Misinformation and Disinformation
Deepfake AI has the capability of disseminating false information and manipulating public perception. Deepfakes can be used by malicious actors to impersonate individuals and create fake news, jeopardizing trust in media and information sources.
2.2 Privacy Concerns
Deepfakes raise serious privacy concerns because personal data can be used to create fabricated content. Individuals' privacy may be jeopardized when their faces and voices are used without their permission.
2.3 Political Manipulation
Deepfake AI can be used to target political figures. These tampered-with videos and audio recordings can be used to fabricate evidence, sway elections, and tarnish reputations.
2.4 Identity Theft
Deepfakes can be used to steal people's identities, causing significant harm. Criminals may use realistic deepfake content to create fake profiles, steal identities, or commit fraud.
3: Detecting Deepfake AI
3.1 Facial and Vocal Anomalies
Deepfakes are frequently detected by examining facial and vocal cues: unusual movements, abnormal blinking patterns, and inconsistent lip-syncing are red flags.
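One of these cues, blink rate, lends itself to a simple heuristic: early deepfakes often showed unnaturally low blink rates. The sketch below assumes blink timestamps have already been extracted by an eye-tracking model, and the "normal" range of 8–30 blinks per minute is an illustrative threshold, not a clinically derived one:

```python
def blink_rate_flag(blink_timestamps, duration_s, normal_range=(8, 30)):
    """Flag a clip whose blinks-per-minute fall outside a typical
    human range. Thresholds are illustrative assumptions."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    bpm = len(blink_timestamps) * 60.0 / duration_s
    lo, hi = normal_range
    return (bpm < lo or bpm > hi), bpm
```

A 60-second clip with only two detected blinks (2 bpm) would be flagged, while one with fifteen (15 bpm) would not. On its own this is weak evidence; in practice it is one feature among many.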
3.2 Metadata Analysis
Deepfake AI can sometimes leave digital traces in media metadata. Analyzing metadata for inconsistencies can aid in the detection of manipulated content.
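A minimal sketch of such a metadata check, assuming the metadata has already been parsed into a dictionary (the field names and rules below are illustrative; real tools parse EXIF and container metadata in far more depth):

```python
def metadata_red_flags(meta):
    """Scan a dict of image/video metadata fields for simple
    inconsistencies that may indicate manipulation.
    Field names and rules are illustrative assumptions."""
    flags = []
    if not meta.get("camera_make"):
        flags.append("missing camera make")
    software = meta.get("software", "").lower()
    if any(tool in software for tool in ("photoshop", "gan", "faceswap")):
        flags.append("editing software tag: " + meta["software"])
    if meta.get("modify_date") and meta.get("create_date"):
        if meta["modify_date"] < meta["create_date"]:
            flags.append("modification date precedes creation date")
    return flags
```

A file tagged with face-swapping software, no camera make, and a modification date before its creation date would raise all three flags; a clean camera original raises none. Absence of flags proves nothing, since metadata is easily stripped or forged.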
3.3 Developing AI Algorithms to Detect Deepfake AI
Deepfakes refer to synthetic media created using advanced AI and ML techniques. What are its potential applications and implications for society at large?
Deepfakes Manipulating Reality with AI.pdf – IMRAN SIDDIQ
Blogging has been a passion of mine for quite some time. I find immense joy in creating engaging content that informs, entertains, and inspires my readers. Through my blog, I aim to explore various topics related to AI, curative technologies, and their impact on our lives.
Artificial intelligence has emerged as a transformative force in today's world. It has the potential to revolutionize industries, enhance our daily lives, and solve complex problems. As an AI enthusiast, I'm constantly exploring the latest advancements, applications, and ethical considerations surrounding this field. I believe in the power of AI to drive positive change and create a better future for all.
Additionally, my curiosity extends to curative technologies, which focus on finding innovative solutions to diseases and health-related challenges. I'm fascinated by the advancements in medical research, genomics, and personalized medicine, and I strive to stay up-to-date with the latest breakthroughs. Through my blog, I aim to demystify complex medical concepts and present them in an accessible manner for my readers.
By combining my passion for blogging, AI, and curative technologies, I aim to provide valuable insights, thought-provoking discussions, and practical information to my readers. I hope to contribute to the growing dialogue surrounding these topics and create a community where like-minded individuals can engage, learn, and exchange ideas.
Join me on this exciting journey as we explore the wonders of artificial intelligence, delve into the realm of curative technologies, and uncover the potential they hold for shaping our future. Together, let's embark on a quest to understand and harness the power of these transformative fields.
Thank you for visiting my blog, and I look forward to sharing knowledge and inspiration with you!
Classification and evaluation of digital forensic tools – TELKOMNIKA JOURNAL
Digital forensic tools (DFTs) are used to detect the authenticity of digital images. Different DFTs have been developed to detect forgery, such as (i) forensics-focused operating systems, (ii) computer forensics, (iii) memory forensics, (iv) mobile device forensics, and (v) software forensics tools (SFTs). These tools are dedicated to detecting forged images depending on the type of application. Based on our review, we found that in the literature on DFTs, little attention is given to the evaluation and analysis of forensic tools. Among the various DFTs, we choose SFTs because they are concerned with the detection of forged digital images. Therefore, the purpose of this study is to classify the different DFTs and evaluate software forensic tools (SFTs) based on the different features present in them. In our work, we evaluate the following five SFTs, i.e., “FotoForensics”, “JPEGsnoop”, “Ghiro”, “Forensically”, and “Izitru”, based on different features so that new research directions can be identified for the development of SFTs.
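A feature-based evaluation of this kind boils down to a tool-versus-feature matrix. The sketch below ranks tools by how many desired features each supports; the tool names follow the survey, but the feature names and mappings are placeholder assumptions, not the paper's actual findings:

```python
def score_tools(tool_features, wanted):
    """Rank forensic tools by the number of desired features each
    supports. Feature assignments here are illustrative assumptions."""
    return sorted(
        ((sum(f in feats for f in wanted), name)
         for name, feats in tool_features.items()),
        reverse=True,
    )

# Hypothetical feature matrix for three of the surveyed SFTs.
tools = {
    "FotoForensics": {"error_level_analysis", "metadata"},
    "JPEGsnoop": {"metadata", "quantization_tables"},
    "Ghiro": {"metadata", "hash_matching", "geolocation"},
}
ranking = score_tools(tools, ["metadata", "error_level_analysis"])
```

With these placeholder features, FotoForensics ranks first for a user who wants metadata extraction plus error-level analysis; changing the `wanted` list changes the ranking, which is the point of a feature-based evaluation.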
A survey of deepfakes in terms of deep learning and multimedia forensics – IJECEIAES
Artificial intelligence techniques are reaching us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is called deepfakes. Deepfakes are used to completely modify video (or image) content to display something that was not in it originally. The danger of deepfake technology lies in its impact on society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake detection technology from the view of two concepts, deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, ii) how deep learning and forensic tools have contributed to the detection of deepfakes, and iii) how, in the future, incorporating both deep learning technology and forensic tools can increase the efficiency of deepfake detection.
356 Part II • Predictive Analytics/Machine Learning – Face re.docx – domenicacullison
356 Part II • Predictive Analytics/Machine Learning
Face recognition, although seemingly similar to image recognition, is a much more complicated undertaking. The goal of face recognition is to identify the individual as opposed to the class it belongs to (human), and this identification task needs to be performed in a nonstatic (i.e., moving person) 3D environment. Face recognition has been an active research field in AI for many decades with limited success until recently. Thanks to the new generation of algorithms (i.e., deep learning) coupled with large data sets and computational power, face recognition technology is starting to make a significant impact on real-world applications. From security to marketing, face recognition and the variety of applications/use cases of this technology are increasing at an astounding pace.

Some of the premier examples of face recognition (both in advancements in the technology and in its creative use) come from China. Today in China, face recognition is a very hot topic from both business development and application development perspectives. Face recognition has become a fruitful ecosystem with hundreds of start-ups in China. In personal and/or business settings, people in China are widely using and relying on devices whose security is based on automatic recognition of their faces.

As perhaps the largest-scale practical application of deep learning and face recognition in the world today, the Chinese government recently started a project known as "Sharp Eyes" that aims at establishing a nationwide surveillance system based on face recognition. The project plans to integrate security cameras already installed in public places with private cameras on buildings and to utilize AI and deep learning to analyze the videos from those cameras. With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. With this system, cameras in some cities can scan train and bus stations as well as airports to identify and catch China's most wanted suspected criminals. Billboard-size displays can show the faces of jaywalkers and list the names and pictures of people who do not pay their debts. Facial recognition scanners guard the entrances to housing complexes.

An interesting example of this surveillance system is the "shame game" (Mozur, 2018). An intersection south of Changhong Bridge in the city of Xiangyang previously was a nightmare. Cars drove fast, and jaywalkers darted into the street. Then, in the summer of 2017, the police put up cameras linked to facial recognition technology and a big outdoor screen. Photos of lawbreakers were displayed alongside their names and government identification numbers. People were initially excited to see their faces on the screen until propaganda outlets told them that this was a form of punishment. Using this, citizens not only became .
A Privacy-Preserving Deep Learning Framework for CNN-Based Fake Face Detection – IRJET Journal
This document presents a research paper that proposes a privacy-preserving deep learning framework for CNN-based fake face detection. The framework aims to develop a robust CNN model to accurately detect fake faces in images and videos while preserving user privacy. The researchers train their CNN model on a dataset of authentic and synthetic facial images representing techniques like deepfakes, morphing, and facial reenactment. Their evaluation shows the CNN model achieves state-of-the-art performance in fake face detection with 98% accuracy, addressing an important challenge while balancing detection capabilities with privacy concerns. The proposed approach could serve as a valuable tool for content verification, privacy protection, and ensuring trust in applications using digital media.
The Dark Side of AI: Deepfake Technology Threatens Trust | CyberPro Magazine – cyberprosocial
"Deepfake technology, once reserved for experts, is now within reach of anyone with an internet connection." – Professor Hany Farid, University of California, Berkeley
Towards Secure and Interpretable AI: Scalable Methods, Interactive Visualizat... – polochau
We have witnessed tremendous growth in Artificial intelligence (AI) and machine learning (ML) recently. However, research shows that AI and ML models are often vulnerable to adversarial attacks, and their predictions can be difficult to understand, evaluate and ultimately act upon.
Discovering real-world vulnerabilities of deep neural networks and countermeasures to mitigate such threats has become essential to successful deployment of AI in security settings. We present our joint works with Intel which include the first targeted physical adversarial attack (ShapeShifter) that fools state-of-the-art object detectors; a fast defense (SHIELD) that removes digital adversarial noise by stochastic data compression; and interactive systems (ADAGIO and MLsploit) that further democratize the study of adversarial machine learning and facilitate real-time experimentation for deep learning practitioners.
Finally, we also present how scalable interactive visualization can be used to amplify people’s ability to understand and interact with large-scale data and complex models. We sample from projects where interactive visualization has provided key leaps of insight, from increased model interpretability (Gamut with Microsoft Research), to model explorability with models trained on millions of instances (ActiVis deployed with Facebook), increased usability for non-experts about state-of-the-art AI (GAN Lab open-sourced with Google Brain; went viral!), and our latest work Summit, an interactive system that scalably summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. We conclude by highlighting the next visual analytics research frontiers in AI.
=== Presenter Bio ===
Polo Chau
Associate Professor and ML Area Leader, College of Computing
Associate Director, MS Analytics
Georgia Institute of Technology
Polo Chau is an Associate Professor of Computing at Georgia Tech. He co-directs Georgia Tech's MS Analytics program. His research group bridges machine learning and visualization to synthesize scalable interactive tools for making sense of massive datasets, interpreting complex AI models, and solving real world problems in cybersecurity, human-centered AI, graph visualization and mining, and social good. His Ph.D. in Machine Learning from Carnegie Mellon University won CMU's Computer Science Dissertation Award, Honorable Mention. He received awards and grants from NSF, NIH, NASA, DARPA, Intel (Intel Outstanding Researcher), Symantec, Google, Nvidia, IBM, Yahoo, Amazon, Microsoft, eBay, LexisNexis; Raytheon Faculty Fellowship; Edenfield Faculty Fellowship; Outstanding Junior Faculty Award; The Lester Endowment Award; Symantec fellowship (twice); Best student papers at SDM'14 and KDD'16 (runner-up); Best demo at SIGMOD'17 (runner-up); Chinese CHI'18 Best paper. His research led to open-sourc
AI: The New Player in Cybersecurity (Nov. 08, 2023) – Takeshi Takahashi
These slides outline how AI is influencing cybersecurity.
Note that they were used in the keynote speech at the event "Defense and Security 2023" held in Thailand on November 8, 2023.
New Research Articles 2019 September Issue International Journal of Artificia... – gerogepatton
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Artificial Intelligence & Applications (IJAIA). It is an international journal intended for professionals and researchers in all fields of AI for researchers, programmers, and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and applications
Computer vision is a prominent subset of artificial intelligence that can analyse and make sense of image and video data. Dr Tian Jing, Senior Lecturer & Consultant, Artificial Intelligence Practice will expand on recent advanced computer vision developments and key use cases in the new normal, such as social distancing in surveillance, hand hygiene monitoring in healthcare and more. This talk will also demonstrate examples of practice module projects of Intelligent Sensing Systems Graduate Certificate, offered by NUS-ISS in the past semesters.
The document discusses the key concepts of the Internet of Things (IoT), including enabling technologies, internet usage trends, and evolution of the internet. It defines context, entities, and context-awareness as they relate to IoT. Examples of IoT applications are provided like smart baby monitors and smart home devices. Challenges of IoT deployment include standards agreement, security, and potential job disruption.
Youtube: https://www.youtube.com/watch?v=9JeOHyQew6M
Martha Larson, Zhuoran Liu, Simon Brugman and Zhengyu Zhao, Pixel Privacy: Increasing Image Appeal while Blocking Automatic Inference of Sensitive Scene Information. Proc. of MediaEval 2018, 29-31 October 2018, Sophia Antipolis, France.
Abstract: We introduce a new privacy task focused on images that users share online. The task benchmarks image transformation algorithms that are capable of blocking the ability of automatic classifiers to infer sensitive information in images. At the same time, the image transformations should maintain the original value of the image to the user who is sharing it, either by leaving it not obviously changed, or by enhancing it to increase its visual appeal. This year, the focus is on a set of 60 scene categories, selected from the Places365-Standard data set, that can be considered privacy sensitive.
Presented by Martha Larson
Unleashing the Potentials of Immersive Augmented Reality for Software Enginee... – Leonel Merino
The document discusses the potential benefits of immersive augmented reality (AR) for software engineering. It outlines how AR could help with software evolution, comprehension, and performance awareness by overcoming issues of 3D visualization on screens. The document presents preliminary frameworks for collaboration/communication, embodiment/mediated reality, mobility/multi-device usage, and pervasiveness/privacy in AR. It suggests AR may benefit requirements engineering, software design, implementation, DevOps, testing, and maintenance by leveraging aspects like collaboration, mobility, and pervasiveness.
Broadcasting Forensics Using Machine Learning Approaches – ijtsrd
Broadcasting forensics is the practice of using scientific methods and techniques to analyse and authenticate multimedia content. Over the past decade, consumer-grade imaging sensors have become increasingly prevalent, generating vast quantities of images and videos that are used for various public and private communication purposes, including publicity, advocacy, disinformation, and deception. This paper aims to develop tools that can extract knowledge from these visuals and comprehend their provenance. However, many images and videos undergo modification and manipulation before public release, which can misrepresent the facts and deceive viewers. To address this issue, we propose a set of forensic and counter-forensic techniques that can help establish the authenticity and integrity of multimedia content. Additionally, we suggest ways to modify content intentionally to mislead potential adversaries. Our proposed tools are evaluated using publicly available datasets and independently organized challenges. Our results show that the forensic and counter-forensic techniques can accurately identify manipulated content and can help restore the original image or video. Furthermore, this paper demonstrates that the modified content can successfully deceive potential adversaries while remaining undetected by state-of-the-art forensic methods. Amit Kapoor | Prof. Vinod Mahor "Broadcasting Forensics Using Machine Learning Approaches" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-3, June 2023, URL: https://www.ijtsrd.com/papers/ijtsrd57545.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/57545/broadcasting-forensics-using-machine-learning-approaches/amit-kapoor
Fayin Li is seeking a full-time research position in machine learning, computer vision, and image processing. He has over 10 years of experience in these fields, including expertise in mathematics, algorithms, software development, machine learning techniques, and programming languages. He received his Ph.D from George Mason University where he conducted research on topics such as object recognition, face recognition, and motion estimation.
Recognition of Sentiment using Deep Neural Network – ijtsrd
Emotion is one of the most essential elements in predicting human nature and understanding human behaviour. Though recognizing emotion is an easy task for a human being, it is not the same for a computer, and so research is being conducted to predict behaviour correctly with higher precision and accuracy. This paper demonstrates real-time facial emotion recognition into one of seven categories of emotion: angry, disgust, fear, happy, neutral, sad and surprise. We use a simple 4-layer Convolutional Neural Network (CNN). We have also implemented various filters and pre-processing to remove noise and have taken care of overfitting. We have tried to improve the accuracy of the model by applying various filters and optimizing the data for feature extraction, obtaining accurate predictions. The dataset used for testing and training is FER2013, and the proposed trained model gives an accuracy of about 73%. Keywords: Emotion Recognition, Convolutional Neural Network (CNN), pre-processing, overfitting, optimization, feature extraction. Amit Yadav | Anand Gupta | Ms. Aarushi Thusu "Recognition of Sentiment using Deep Neural Network" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-1, February 2023, URL: https://www.ijtsrd.com/papers/ijtsrd52797.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/52797/recognition-of-sentiment-using-deep-neural-network/amit-yadav
Face and liveness detection with criminal identification using machine learni... – IAESIJAI
In the past, real-world photos have been used to train classifiers for face liveness identification, since the related face presentation attacks (PA) and real-world images have a high degree of overlap. The combined use of deep convolutional neural networks (CNN) and real-world face photos to identify the liveness of a face, however, has received very little study. A face recognition system should be able to identify real faces as well as spoofing attempts that use printed or digital presentations. A true spoofing-avoidance method involves observing facial liveness cues such as eye blinking and lip movement. However, this strategy is rendered useless when defending against replay attacks that use video. The anti-spoofing technique consists of two modules: the ConvNet classifier module and the blinking-eye module, which measures lip and eye movement. The testing results demonstrate that the developed module is capable of identifying various face spoof attacks, including those made with posters, masks, or smartphones. This study adaptively fuses convolutional features from deep-CNN-generated face pictures with convolutional layers learned from real-world identification. Extensive tests using intra-database and cross-database scenarios on state-of-the-art face anti-spoofing databases, including CASIA, OULU, NUAA and the replay-attack dataset, demonstrate the effectiveness of the proposed face liveness detection methods. The algorithm has a 94.30% accuracy rate.
Similar to Deepfakes: An Emerging Internet Threat and their Detection (20)
Deepfake Detection: The Importance of Training Data Preprocessing and Practic... – Symeon Papadopoulos
Talk on the AI4Media Workshop on GANs for Media Content Generation, October 1st 2020, https://ai4media.eu/events/gan-media-generation-workshop-oct-2020/
Short panel presentation given in the context of the AI4EU WebCafe "The COVID-19 and Contact Tracing Apps" on June 23rd 2020, focusing on the problem of COVID-19 misinformation and how this could potentially affect the adoption of contact tracing apps.
Lecture given on January 28, 2019 to post-graduate students of the Computer Engineering and Media program, at the School of Journalism and Media, Aristotle University of Thessaloniki.
Presentation on the topic of sensing air-quality at city level based on Twitter data given at the IEEE Image, Video, and Multidimensional Signal Processing (IVMSP) 2018 workshop in Aristi, Greece.
Aggregating and Analyzing the Context of Social Media Content – Symeon Papadopoulos
Introduction to the Context Analysis and Aggregation service of InVID. Given at the Workshop on Content Verification Tools hosted by the journalists' association in Thessaloniki, Greece on June 6, 2018.
Summary of problems and research results on the problem of verifying multimedia content on the Internet. Includes results from the REVEAL and InVID research projects. Presented at the Technology Forum, Thessaloniki, May 16, 2018.
Presentation of web-based service developed within REVEAL and InVID on Experts’ Meeting on Digital Image Authentication and Classification, December 6, 2017.
This document summarizes research on detecting misleading content on Twitter. The researchers developed a framework that uses tweet-based and user-based features to train predictive models to classify tweets as real or fake. They collected a verification corpus of over 6,000 real and 9,500 fake tweets across 17 events for model training and testing. Experimental results showed the combined model achieved over 90% accuracy at detecting fake tweets, outperforming other methods. The researchers also created an online Tweet Verification Assistant to help fact-check tweets.
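The tweet-based and user-based features that feed such a classifier can be sketched with a simple extractor; the field names and the specific features below are illustrative assumptions, not the paper's exact feature set:

```python
def tweet_features(tweet):
    """Extract simple tweet-based and user-based features of the kind
    used to train real-vs-fake tweet classifiers. Field names and the
    feature set are illustrative assumptions."""
    text = tweet["text"]
    return {
        "length": len(text),                  # tweet-based features
        "num_hashtags": text.count("#"),
        "num_mentions": text.count("@"),
        "has_url": "http" in text,
        "user_followers": tweet.get("user_followers", 0),   # user-based
        "user_verified": tweet.get("user_verified", False),
    }
```

These feature vectors would then be fed to a standard supervised classifier trained on the labeled verification corpus; the combined tweet-plus-user model is what the study reports as most accurate.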
Near-Duplicate Video Retrieval by Aggregating Intermediate CNN Layers – Symeon Papadopoulos
This document summarizes a research paper on near-duplicate video retrieval using features extracted from intermediate layers of convolutional neural networks. The researchers extract features from multiple layers of pretrained CNNs like AlexNet, VGGNet and GoogLeNet. They aggregate the features using two methods: vector-based aggregation that concatenates features, and layer-based aggregation that averages features within each layer. These aggregated representations are indexed and used to retrieve near-duplicate videos from a dataset. Their approach outperforms previous methods on standard evaluation metrics, achieving mean average precision of up to 0.81. The researchers also discuss expanding their work to use 3D CNNs and evaluate on larger more challenging datasets.
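The two aggregation schemes can be sketched with toy per-layer descriptors; the real descriptors are intermediate CNN activations from networks such as AlexNet or VGGNet, so the values below are placeholders that only illustrate the shapes involved:

```python
import numpy as np

# Toy per-layer descriptors standing in for intermediate CNN features.
layer_feats = [np.ones(4), np.full(2, 3.0)]

def vector_aggregate(feats):
    """Vector-based aggregation: concatenate per-layer descriptors
    into one long vector."""
    return np.concatenate(feats)

def layer_aggregate(feats):
    """Layer-based aggregation: average within each layer, keeping
    one value per layer (a scalar here; a vector in the real system)."""
    return np.array([f.mean() for f in feats])
```

Vector-based aggregation preserves every dimension at the cost of a longer index entry; layer-based aggregation yields a compact representation per layer, which is the trade-off the paper evaluates.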
Tutorial for ACM Multimedia 2016, given together with Gerald Friedland, with contributions from Julia Bernd and Yiannis Kompatsiaris. The presentation covered an introduction to the problem of disclosing personal information through multimedia sharing, the associated security risks, methods for conducting multimodal inferences, and technical frameworks that could help alleviate such risks.
This document summarizes research on evaluating geotagging performance using different sampling strategies on the YFCC100M dataset. The researchers tested four language models on various samples, including uniform geographic and user sampling, text-based sampling selecting images with more tags, text diversity sampling grouping images by MinHash, and focused, ambiguous, and visual sampling. Performance was measured by precision within 1km and median distance error. Results showed performance variability depending on the sampling strategy and language model training data.
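The two reported metrics, precision within 1 km and median distance error, can be computed from predicted and ground-truth coordinates. This sketch assumes plain (lat, lon) tuples in degrees and uses the standard haversine great-circle distance:

```python
import math
import statistics

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon)
    points, using a mean Earth radius of 6371 km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geotag_metrics(pred, truth):
    """Precision@1km and median distance error for predicted vs
    ground-truth (lat, lon) pairs, as used in placing-task evaluation."""
    errs = [haversine_km(p[0], p[1], t[0], t[1]) for p, t in zip(pred, truth)]
    within_1km = sum(e <= 1.0 for e in errs) / len(errs)
    return within_1km, statistics.median(errs)
```

Perfect predictions give `(1.0, 0.0)`; the sampling strategies in the study change which images enter `pred`/`truth`, which is why the same model can score very differently across samples.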
This document summarizes a team's participation in the MediaEval 2015 Workshop on Retrieving Diverse Social Images. Their approach used a supervised maximal marginal relevance method (sMMR) to jointly optimize relevance and diversity when retrieving images. They tested sMMR using different combinations of visual and textual features. sMMR builds a refined result set incrementally by selecting images that score highest based on relevance to the query and diversity from images already selected. The team trained relevance classifiers using ground truth image labels for queries. Their work was supported by the USEMP project and they provided more details in a poster session.
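The greedy relevance/diversity trade-off at the heart of maximal marginal relevance can be sketched generically; the `relevance` and `similarity` callables below are stand-ins for the paper's learned relevance classifiers and visual/textual similarities, and the weight 0.7 is an arbitrary illustrative choice:

```python
def mmr_select(candidates, relevance, similarity, k, lam=0.7):
    """Greedy maximal marginal relevance: repeatedly pick the item
    maximizing lam * relevance - (1 - lam) * (max similarity to the
    already-selected set). `relevance` maps item -> score and
    `similarity` maps (item, item) -> score; both are assumptions
    standing in for the paper's learned components."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            div = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance(c) - (1 - lam) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Given three images where the two most relevant are near-duplicates of each other, MMR picks the top one and then skips its duplicate in favour of a less relevant but more diverse third image, which is exactly the behaviour a diverse-retrieval task rewards.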
Presentation of the joint participation between CERTH and CEA LIST in the 2015 edition of the MediaEval Placing Task in Wurzen, Germany, September 14-15, 2015.
Presentation of the task overview in MediaEval 2015, Wurzen, Germany. Verifying Multimedia Use is about detecting tweets that carry misleading information and content.
Deepfakes: An Emerging Internet Threat and their Detection
1. Deepfakes: An Emerging Internet Threat
and their Detection
Dr. Symeon (Akis) Papadopoulos – @sympap
MeVer Team @ Information Technologies Institute (ITI) /
Centre for Research & Technology Hellas (CERTH)
In collaboration with Polychronis Charitidis, George Kordopatis-Zilos,
Nikos Sarris and Yiannis Kompatsiaris
AI4EU Café, Dec 16th 2020
Media Verification
(MeVer)
2. DeepFakes: Definition
• Content generated by deep neural networks that appears authentic to the human eye
• Most common form: generation and manipulation of the human face
Source: https://en.wikipedia.org/wiki/Deepfake
Source: https://www.youtube.com/watch?v=iHv6Q9ychnA
Source: Media Forensics and DeepFakes: an overview
3. Agenda
1. DeepFakes in the News
2. DeepFake Basics
3. DeepFakes Detection
4. Our Lessons Learned
4. Agenda: Part 1 - DeepFakes in the News
5. State of DeepFakes
• Quick increase of DF content online
• Majority of DF content is pornographic
• Significant reach
• Vast majority of subjects in DF
pornographic videos are actresses and
musicians
• Subjects in YT DF videos also include
politicians and business people
Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The State of DeepFakes:
Landscape, Threats and Impact. Report by DeepTraceLabs/Sensity.
6. Gaining popularity
Nguyen, T. T., et al. (2019). Deep learning for
deepfakes creation and detection. arXiv preprint
arXiv:1909.11573.
Ajder, H., et al. (2019). The State of DeepFakes:
Landscape, Threats and Impact. Report by
DeepTraceLabs/Sensity.
7. https://www.wired.com/story/telegram-still-hasnt-removed-an-ai-bot-thats-abusing-women/
“The bot uses a version of the DeepNude AI tool, which was originally created in 2019, to remove clothes from photos of women and generate their body parts. Anyone can easily use the bot to generate images. More than 100,000 such images have been publicly shared by the bot in several Telegram chat channels associated with it.”
8. DeepFakes and Privacy Risks
https://www.androidpolice.com/2019/09/03/zao-deepfake-app-privacy/
The app quickly garnered negative press focusing on privacy concerns in heavily surveilled China, of all places. Reporters cited the user agreement, which gives the company behind ZAO the right to use any imagery created on the app for free and for all purposes, with no option to withdraw consent once accepted. ZAO has since responded and updated the agreement, writing that it changed the controversial passages and that it would remove any user-deleted content from its servers, too.
9. Reface: The Normalization of DeepFakes
“The app normalises deepfakes, and not everyone understands the concerns arising from them because not everyone has the digital know-how to differentiate what is real and what isn’t,” said Apurva Singh, a privacy expert and volunteer legal counsel at the Software Freedom Law Center, India.
https://www.vice.com/en/article/wxqkbn/viral-reface-app-going-to-make-deepfake-problem-worse
10. Fake Identities
But Katie Jones doesn’t exist, The Associated Press has determined. Instead, the persona was part of a vast army of phantom profiles lurking on the professional networking site LinkedIn. And several experts contacted by the AP said Jones’ profile picture appeared to have been created by a computer program.
https://apnews.com/article/bc2f19097a4c4fffaa00de6770b8a60d
11. DeepFakes and Politics
One week after the video’s release, Gabon’s military attempted an ultimately unsuccessful coup, the country’s first since 1964, citing the video’s oddness as proof something was amiss with the president.
https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/
Mr Nguyen said he could not rule out the video being a ‘deepfake’, a term for the fairly new artificial intelligence based technology which involves machine learning techniques to superimpose a face on a video.
https://www.sbs.com.au/news/a-gay-sex-tape-is-threatening-to-end-the-political-careers-of-two-men-in-malaysia
13. Agenda: Part 2 - DeepFake Basics
14. Manipulation types
Facial manipulations can be categorised into four main groups:
• Entire face synthesis
• Attribute manipulation
• Identity swap
• Expression swap
Source: DeepFakes and Beyond: A Survey of Face Manipulation and Fake
Detection (Tolosana et al., 2020)
Tolosana, R., et al. (2020). Deepfakes and beyond:
A survey of face manipulation and fake
detection. arXiv preprint arXiv:2001.00179.
Verdoliva, L. (2020). Media forensics and deepfakes:
an overview. arXiv preprint arXiv:2001.06564.
Mirsky, Y., & Lee, W. (2020). The Creation and
Detection of Deepfakes: A Survey. arXiv preprint
arXiv:2004.11138.
(Figure labels: replacement, reenactment, editing)
15. Basic Principle: Encoder/Decoder Scheme
https://jonathan-hui.medium.com/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9
• The shared encoder captures face angle, skin tone, facial expression and lighting
• Separate decoders learn person 1-specific and person 2-specific features
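A minimal NumPy sketch of this shared-encoder / per-identity-decoder scheme (random untrained weights and made-up dimensions; purely structural, with no training loop):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64 * 64, 128  # flattened face size and latent size (illustrative)

# One shared encoder captures identity-independent traits (pose,
# expression, lighting); one decoder per identity is trained to
# reconstruct that person's face from the shared latent code.
W_enc = rng.standard_normal((LATENT, DIM)) * 0.01
W_dec_p1 = rng.standard_normal((DIM, LATENT)) * 0.01
W_dec_p2 = rng.standard_normal((DIM, LATENT)) * 0.01

def encode(face):            # face -> shared latent code
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):   # latent -> reconstructed face
    return W_dec @ latent

face_p1 = rng.standard_normal(DIM)

# Training reconstructs each person with their own decoder; the swap
# happens at inference: encode person 1, decode with person 2's decoder.
swapped = decode(encode(face_p1), W_dec_p2)
print(swapped.shape)  # (4096,)
```

In a real deepfake pipeline the linear maps above are deep convolutional networks and the swap output is blended back into the source frame.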
16. Common DF Neural Network Architectures
Mirsky, Y., & Lee, W. (2020). The Creation and Detection of Deepfakes: A Survey. arXiv
preprint arXiv:2004.11138.
17. DeepFake Creation Pipeline
Mirsky, Y., & Lee, W. (2020). The Creation and Detection of Deepfakes: A Survey. arXiv
preprint arXiv:2004.11138.
20. Agenda: Part 3 - DeepFakes Detection
21. Signs of a DeepFake (in 2020)
• Different kinds of artifacts
• Blurry areas around lips, hair, earlobes
• Lack of symmetry
• Lighting inconsistencies
• Fuzzy background
• Flickering (in video)
https://apnews.com/article/bc2f19097a4c4fffaa00de6770b8a60d
22. Test your Skills
• Which face is real? https://www.whichfaceisreal.com/
• Can you spot the deepfake video? https://detectfakes.media.mit.edu/
23. Detection using physiological features
• Exploits eye blinking, a physiological signal that is not well
reproduced in synthesized fake videos
Y. Li, M. Chang, and S. Lyu, “In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking,” in Proc. IEEE International
Workshop on Information Forensics and Security, 2018.
24. Detection using physiological features
• Based on subtle changes of colour and motion in RGB videos, which enable
methods such as colour-based remote photoplethysmography (rPPG or iPPG)
Ciftci, U. A., Demir, I., & Yin, L. (2020). Fakecatcher: Detection of synthetic portrait videos using biological signals. IEEE Transactions
on Pattern Analysis and Machine Intelligence.
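This is not the FakeCatcher method itself, but the core rPPG signal extraction can be illustrated on synthetic data: average the green channel over the face crop per frame, then locate the dominant frequency in the plausible heart-rate band. The 1.2 Hz pulse, crop size and band limits below are all made up for the demo:

```python
import numpy as np

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps

# Synthetic face crops: constant skin tone plus a faint 1.2 Hz (72 bpm)
# modulation of the green channel, as a real pulse would cause.
frames = np.full((len(t), 16, 16, 3), 120.0)
frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

# rPPG: spatially average the green channel per frame, remove the DC
# component, then pick the strongest frequency in the heart-rate band.
signal = frames[..., 1].mean(axis=(1, 2))
signal -= signal.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)  # roughly 40-240 bpm
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(round(pulse_hz * 60))  # 72 bpm
```

A genuine face yields a coherent pulse signal like this; synthesized faces tend to break its spatial and temporal consistency, which is the cue such detectors exploit.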
25. Detection using head pose features
• Exploiting the errors that can be introduced by deepfake generation
methods in 3D head poses.
Yang, X., Li, Y., & Lyu, S. (2019, May). Exposing deep fakes using inconsistent head poses. In ICASSP 2019-2019 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP) (pp. 8261-8265). IEEE.
26. Artifact-based detection methods
• Exploiting artifacts from specific generation methods
Matern, F., Riess, C., & Stamminger, M. (2019, January). Exploiting
visual artifacts to expose deepfakes and face manipulations. In 2019
IEEE Winter Appl. of Computer Vision Workshops (WACVW) (pp. 83-92)
Visual artifacts
Limited resolution
Li, Y., & Lyu, S. (2019). Exposing DeepFake Videos By Detecting
Face Warping Artifacts. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition Workshops (pp. 46-52).
27. Artifact-based detection methods
• Exploits the deepfake
generation step of
blending the altered face
into an existing
background image
• Localizes the
manipulation region of
the face
Li, L., Bao, J., Zhang, T., Yang, H., Chen, D., Wen, F., & Guo, B. (2020). Face x-ray for more general face forgery detection. In Proc. of IEEE/CVF
Conference on Computer Vision and Pattern Recognition (pp. 5001-5010).
28. CNN-based approaches
• MesoNet
• XceptionNet
• Capsule Networks
Afchar, D., Nozick, V., Yamagishi, J., & Echizen, I. (2018, December). Mesonet: a compact facial video
forgery detection network. In 2018 IEEE International Workshop on Information Forensics and
Security (WIFS) (pp. 1-7).
Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019).
Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE
International Conference on Computer Vision (pp. 1-11).
Nguyen, H. H., Yamagishi, J., & Echizen, I. (2019, May). Capsule-forensics: Using capsule networks to
detect forged images and videos. In ICASSP 2019-2019 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) (pp. 2307-2311). IEEE.
29. CNN-based approaches
• Exploiting the temporal dimension using recurrent neural networks
Sabir, E., Cheng, J., Jaiswal, A., AbdAlmageed, W., Masi, I., & Natarajan, P. (2019). Recurrent Convolutional Strategies for Face Manipulation
Detection in Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 80-87).
30. Frequency domain GAN-fake face detection
• Exploiting
• frequency-aware decomposed image components (FAD)
• local frequency statistics (LFS)
• Fusion of these features
Qian, Y., Yin, G., Sheng, L., Chen, Z., & Shao, J. (2020, August). Thinking in frequency: Face forgery detection by mining frequency-aware clues.
In European Conference on Computer Vision (pp. 86-103). Springer, Cham.
31. Frequency domain GAN-fake face detection
• Two similar approaches exploit the fact that common up-sampling methods
(up-convolution, also known as transposed convolution) make such models
unable to correctly reproduce the spectral distributions of natural
training data.
Wang, S. Y., Wang, O., Zhang, R., Owens, A., & Efros, A. A. (2020). CNN-generated images are surprisingly easy to spot... for now. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition (Vol. 7).
Durall, R., Keuper, M., & Keuper, J. (2020). Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce
Spectral Distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7890-7899).
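The effect can be probed with an azimuthally averaged power spectrum, as in Durall et al. In the toy sketch below, nearest-neighbour pixel repetition stands in for an up-convolution and smoothed noise stands in for a natural image; everything else is illustrative:

```python
import numpy as np

def radial_power_profile(img):
    """Azimuthally averaged Fourier power spectrum (power vs. radius)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int).ravel()
    # Mean power within each integer-radius (spatial frequency) bin.
    return np.bincount(r, weights=f.ravel()) / np.maximum(np.bincount(r), 1)

rng = np.random.default_rng(1)
natural = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth, low-frequency image
upsampled = np.kron(natural[::2, ::2], np.ones((2, 2)))      # naive nearest-neighbour 2x up-sampling

p_nat = radial_power_profile(natural)
p_up = radial_power_profile(upsampled)

# Up-sampling leaves the low-frequency bins nearly intact but clearly
# distorts the high-frequency tail of the spectrum.
rel = np.abs(p_up - p_nat) / (p_nat + 1e-12)
print(rel[1:6].mean() < rel[28:40].mean())  # True
```

Detectors in this family fit a simple classifier on such 1D spectral profiles instead of on pixels.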
32. Performance of SotA Approaches on FF++
Method Dataset Metric Performance
Matern et al. (2019) FF++ / DFD AUC 0.78
Li et al. (2019) FF++ / DFD AUC 0.93
Li et al. (2020) FF++ AUC 0.99
Afchar et al. (2018) FF++ (NeuralTextures) Acc 85%
Rossler et al. (2019) FF++ (all video qualities) Acc 91%
Nguyen et al. (2019) FF++ (all video qualities) Acc 92%
Masi et al. (2020) FF++ (all video qualities) Acc 94%
Ciftci et al. (2020) FF++ Acc 94%
Qi et al. (2020) FF++ (high quality) Acc 98%
Qian et al. (2020) FF++ Acc 93%
The FaceForensics++ dataset does not pose any challenge to SotA methods.
33. Performance of SotA methods on DFDC
The DFDC highlights the generalization challenge faced by SotA methods.
34. Agenda: Part 4 - Our Lessons Learned
35. Context of our Research
https://weverify.eu/ https://ai4media.eu/
Innovation Action: 2018-2021
• Problem-driven: real-world testing
and issues
• Close interaction with end users
(journalists, citizens)
Research & Innovation Action: 2020-2024
• Research-driven: improve SotA and leverage
new advances in AI
• Close interaction with leading researchers
36. DeepFake Detection Challenge
• Goal: detect videos with facial or voice manipulations
• 2,114 teams participated in the challenge
• Log Loss error evaluation on public and private validation sets
• Public evaluation contained videos with similar transformations as the
training set
• Private evaluation contained organic videos and videos with unknown
transformations from the Internet
• Our final standings:
• public leaderboard: 49 (top 3%) with 0.295 Log Loss error
• private leaderboard: 115 (top 5%) with 0.515 Log Loss error
Source: https://www.kaggle.com/c/deepfake-detection-challenge
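The Log Loss metric used by the challenge is plain binary cross-entropy over per-video fake probabilities. A minimal sketch (the clipping constant mirrors common practice and is an assumption, not the official evaluation code):

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy; predictions are clipped away from 0 and 1
    so a single over-confident mistake cannot produce infinite loss."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# A detector that answers 0.5 for every video scores ln 2 ~= 0.693;
# both team scores above (0.295 public, 0.515 private) beat that baseline.
print(round(log_loss([0, 1, 1, 0], [0.5, 0.5, 0.5, 0.5]), 3))  # 0.693
```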
37. DFDC Dataset
• An order of magnitude bigger:
• Number of videos
• Number of frames
• Number of subjects
• Subject consent
• More deepfake generation methods
Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang,
M., & Ferrer, C. C. (2020). The DeepFake Detection
Challenge Dataset. arXiv preprint arXiv:2006.07397.
38. DeepFake Detection Challenge - dataset
• Dataset of more than 110k videos
• Approx. 20k REAL and the rest are FAKE
• FAKE videos generated from the REAL
• Models used:
• DeepFake AutoEncoder (DFAE)
• Morphable Mask faceswap (MM/NN)
• Neural Talking Heads (NTH)
• FSGAN
• StyleGAN
Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang,
M., & Ferrer, C. C. (2020). The DeepFake Detection Challenge
Dataset. arXiv preprint arXiv:2006.07397.
39. Dataset preprocessing - Issues
• Face dataset quality depends on face extraction accuracy (Dlib,
MTCNN, facenet-pytorch, BlazeFace)
• In general, all face extraction libraries produce some false
positive detections
• Manual tuning can improve the quality of the generated dataset
Pipeline: video corpus → frame extraction → face extraction → deep learning model
40. Noisy data creeping into the training set
• Extracting faces at 1 fps from Kaggle DeepFake Detection Challenge dataset
videos using a PyTorch implementation of MTCNN face detection
• Observation: false detections are fewer than true detections in a video
41. Our “noise” filtering approach
• Compute face embeddings for each detected face in video
• Similarity calculation between all face embeddings in a video → similarity graph construction
• Nodes represent faces, and two faces are connected if their similarity is greater than 0.8 (solid lines)
• Drop components smaller than N/2 (e.g. component 2)
• N is the number of frames that contain face detections (true or false).
Charitidis, P., Kordopatis-Zilos, G., Papadopoulos, S., & Kompatsiaris, Y. (2020). Investigating the impact of preprocessing and prediction
aggregation on the DeepFake detection task. Proceedings of the Conference for Truth and Trust Online (TTO), https://arxiv.org/abs/2006.07084
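A minimal sketch of this filtering step, using a small union-find over cosine similarities (the 3-dimensional toy embeddings are made up; the 0.8 threshold and the N/2 rule come from the slide):

```python
import numpy as np

def filter_face_clusters(embeddings, n_frames, thr=0.8):
    """Group face embeddings whose cosine similarity exceeds `thr` into
    connected components; keep only components of size >= n_frames / 2."""
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)
    sim = e @ e.T

    parent = list(range(len(e)))  # union-find over detected faces
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(e)):
        for j in range(i + 1, len(e)):
            if sim[i, j] > thr:
                parent[find(i)] = find(j)

    comps = {}
    for i in range(len(e)):
        comps.setdefault(find(i), []).append(i)
    return [c for c in comps.values() if len(c) >= n_frames / 2]

# Five near-identical detections of the real face plus one spurious
# detection; the singleton component is dropped as noise.
true_face = [1.0, 0.1, 0.0]
spurious = [0.0, 0.0, 1.0]
kept = filter_face_clusters([true_face] * 5 + [spurious], n_frames=6)
print(kept)  # [[0, 1, 2, 3, 4]]
```

With multiple people in a video, each person survives as a separate component, which is what enables per-face predictions later.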
42. Advantages
• Simple and fast procedure
• No need for manual tuning of the face extraction settings
• Clusters of distinct faces in cases of multiple persons in the video
• This information can be utilized in various ways (e.g. predictions per face)
Faces extracted from multiple video frames
Component 1
Component 2
43. Experiments
• We trained multiple DeepFake detection models on the DFDC dataset
with and without (baseline) our proposed approach
• Three datasets: a) Celeb-DF, b) FaceForensics++, c) DFDC subset
• For evaluation we examined two aggregation approaches
• avg: prediction is the average of all face predictions
• face: prediction is the max prediction among different avg face predictions
• Results for the EfficientNet-B4 model in terms of Log loss error:
                CelebDF         FaceForensics++   DFDC
Pre-processing  avg     face    avg     face      avg     face
baseline        0.510   0.511   0.563   0.563     0.213   0.198
proposed        0.458   0.456   0.497   0.496     0.195   0.173
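The two aggregation schemes can be sketched in a few lines of Python (the per-face grouping is assumed to come from the preprocessing step above):

```python
def aggregate(preds_per_face):
    """preds_per_face: {face_id: [fake probabilities over frames]}.
    'avg' averages every prediction in the video; 'face' averages per
    face first, then takes the most suspicious face (the max)."""
    all_preds = [p for preds in preds_per_face.values() for p in preds]
    face_means = [sum(p) / len(p) for p in preds_per_face.values()]
    return {
        "avg": sum(all_preds) / len(all_preds),
        "face": max(face_means),
    }

# Two faces in one video: face 0 looks real, face 1 looks manipulated.
scores = aggregate({0: [0.1, 0.2], 1: [0.9, 0.8]})
print({k: round(v, 3) for k, v in scores.items()})  # {'avg': 0.5, 'face': 0.85}
```

The 'face' variant keeps one manipulated face from being diluted by several authentic ones, which matches its lower error in the table above.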
44. Our DFDC Approach - details
• Applied proposed preprocessing approach to clean the generated face dataset
• Face augmentation:
• Horizontal & vertical flip, random crop, rotation, image compression, Gaussian & motion
blurring, brightness, saturation & contrast transformation
• Trained three different models: a) EfficientNet-B3, b) EfficientNet-B4, c) I3D*
• Models trained on face level:
• I3D trained with 10 consecutive face images, exploiting temporal information
• EfficientNet models trained on single face images
• Per model:
• Added two dense layers with dropout after the backbone architecture with 256 and 1 units
• Used the sigmoid activation for the last layer
* ignoring the optical flow stream
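The classifier head described above amounts to the following forward pass. The 1792-dimensional input matches EfficientNet-B4's pooled features, while the random weights, initialisation scale and 0.5 dropout rate are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT = 1792  # EfficientNet-B4 pooled feature dimension

# Two dense layers (256 units, then 1) appended after the backbone.
W1, b1 = rng.standard_normal((FEAT, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((256, 1)) * 0.01, np.zeros(1)

def head(features, train=False, drop=0.5):
    h = np.maximum(features @ W1 + b1, 0.0)        # dense + ReLU
    if train:                                       # inverted dropout
        h *= (rng.random(h.shape) >= drop) / (1 - drop)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # dense + sigmoid

batch = rng.standard_normal((4, FEAT))  # stand-in backbone features
probs = head(batch)                     # per-face fake probabilities
print(probs.shape)  # (4, 1)
```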
45. Our DFDC approach – inference
Pipeline: pre-processing → model inference → post-processing
46. Lessons from other DFDC teams
• Most approaches ensemble multiple EfficientNet architectures (B3-B7) and
some of them were trained on different seeds
• ResNeXt was another architecture used by top-performing solutions,
combined with 3D architectures such as I3D, 3D ResNet34, MC3 & R2+1D
• Several approaches increased the margin of the detected facial bounding
box to further improve results.
• We used an additional margin of 20% but other works proposed a higher proportion.
• To improve generalization:
• Domain-specific augmentations: a) half face removal horizontally or vertically, b)
landmark (eyes, nose, or mouth) removal
• Mixup augmentations
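One of these domain-specific augmentations, half face removal, is easy to sketch (crop size and mode names are illustrative):

```python
import numpy as np

def remove_half_face(img, mode):
    """Blank out one half of a face crop; `mode` is one of
    'left', 'right', 'top', 'bottom'."""
    out = img.copy()
    h, w = out.shape[:2]
    if mode == "left":
        out[:, : w // 2] = 0
    elif mode == "right":
        out[:, w // 2 :] = 0
    elif mode == "top":
        out[: h // 2] = 0
    elif mode == "bottom":
        out[h // 2 :] = 0
    else:
        raise ValueError(mode)
    return out

face = np.full((8, 8, 3), 255, dtype=np.uint8)  # dummy white face crop
aug = remove_half_face(face, "left")
print(aug[:, :4].sum(), aug[:, 4:].sum())  # 0 24480
```

Forcing the model to decide from half a face (or with eyes, nose or mouth removed) discourages it from latching onto any single facial region, which is one reported route to better generalization.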
47. Practical challenges
• Limited generalization
• This observation applies to most submissions. The winning team scored
0.20336 in public validation and only 0.42798 in the private (Log Loss)
• Overfitting
• The best submission on the public leaderboard scored 0.19207, but in the
private evaluation its error was 0.57468, dropping it to 904th position!
• Broad problem scope
• The term DeepFake may refer to any possible manipulation or generation technique
• Constantly increasing manipulation and generation techniques
• A detector is only trained with a subset of these manipulations
48. DeepFake Detection in the Wild
• Videos in the wild usually contain multiple scenes
• Only a subset of these scenes may contain DeepFakes
• Detection process might be slow for multi-shot videos (even short ones)
• Low quality videos
• Low quality faces tend to fool classifiers
• Small detected and fast-moving faces
• Usually lead to noisy predictions
• Changes in the environment
• Moving obstacles in front of the faces
• Changes in lighting
49. DeepFake Detection Service @ WeVerify
https://www.youtube.com/watch?v=cVljNVV5VPw&ab_channel=TheFakening
50. Thank you!
Dr. Symeon Papadopoulos
papadop@iti.gr
@sympap
Media Verification (MeVer)
https://mever.iti.gr/
@meverteam
Ack. Polychronis Charitidis, George Kordopatis-Zilos,
Nikos Sarris and Yiannis Kompatsiaris