Deepfakes are a form of synthetic media that use artificial intelligence and machine learning algorithms to create fake images, videos, or audio recordings that appear to be real. They are created by manipulating or combining existing content to produce a realistic result.
Deepfakes - How they work and what it means for the futureJarrod Overson
Deepfakes started as cheap but believable video effects and have expanded into AI-generated content of every format. This session dove into the state of deepfakes and how the technology highlights an exciting but dangerous future.
Although manipulations of visual and auditory media are as old as the media themselves, the recent arrival of deepfakes has marked a turning point in the creation of fake content. Powered by the latest advances in AI and machine learning, they offer automated procedures for creating fake content that is harder and harder for human observers to detect. The possibilities for deception are endless, including manipulated pictures, videos, and audio that will have a large societal impact. Because of this, organizations need to understand the inner workings of the underlying techniques, as well as their strengths and limitations. This article provides a working definition of deepfakes together with an overview of the underlying technology. We classify different deepfake types: photo (face- and body-swapping), audio (voice-swapping, text-to-speech), video (face-swapping, face-morphing, full-body puppetry), and audio & video (lip-synching), and identify risks and opportunities to help organizations think about the future of deepfakes. Finally, we propose the R.E.A.L. framework to manage deepfake risks: Record original content to assure deniability, Expose deepfakes early, Advocate for legal protection, and Leverage trust to counter credulity. Following these principles, we hope that our society can be better prepared to counter deepfake tricks while appreciating their treats.
The "Big Data Analytics and its Use by Apple" presentation provides an overview of how Apple harnesses big data analytics to gain insights, drive innovation, and enhance business performance. It explores Apple's strategic use of data analytics in areas such as product development, customer experience, and operational efficiency, showcasing the value of data-driven decision-making in one of the world's leading technology companies.
deepfake
seminar
computer engineering
PPT on deepfakes, which use AI and deep learning technology, with advantages, disadvantages, introduction, references, and conclusion.
Deepfakes: An Emerging Internet Threat and their DetectionSymeon Papadopoulos
Webinar talk in the context of the AI4EU Web Cafe. Recording of the talk available on: https://youtu.be/wY1rvseH1C8
Deepfakes have emerged as one of the largest Internet threats, and even though their primary use so far has been the creation of pornographic content, the risk of their being abused for disinformation purposes grows by the day. Deepfake creation approaches and tools are continuously improving in result quality and in ease of use for non-experts, and accordingly the amount of deepfake content on the Internet is growing quickly. For that reason, deepfake detection approaches are a valuable tool for media companies, social media platforms, and ultimately citizens, helping them tell authentic from deepfake-generated content. In this presentation, I give a short overview of developments in the field of deepfake detection and present our lessons learned from working on the problem in the context of the Deepfake Detection Challenge and from developing a service for the H2020 WeVerify project.
DeepFake Detection: Challenges, Progress and Hands-on Demonstration of Techno...Symeon Papadopoulos
Slides accompanying an online webinar on DeepFake Detection and a hands-on demonstration of the MeVer DeepFake Detection service. The webinar is supported by the US-Paris Tech Challenge award for our work on the InVID-WeVerify plugin.
The Rise of Deep Fake Technology: A Comprehensive Guidefindeverything
In this guide, we delve into the emergence of deepfake technology, an innovative artificial intelligence (AI) technique that uses complex deep learning algorithms to fabricate manipulated videos or images with a realistic appearance. While this cutting-edge technology has the potential to revolutionize the entertainment and marketing industries, it also poses a significant threat to national security, individual privacy, and the integrity of information. Our comprehensive analysis explores the intricacies of deepfake technology, its diverse applications, its potential benefits and drawbacks, and its profound impact on various industries.
This is a presentation for Brandeis International Business School's Big Data II course about newer technologies using artificial intelligence, mainly the recently trendy Deepfake.
Dissecting the dangers of deepfakes and their impact on reputation Generative...CSIRO National AI Centre
At the recent Generative AI Conference, this talk defined deepfakes and the widespread damage misinformation can cause, in order to build awareness of the ethical implications of deepfakes. At the National AI Centre, Responsible AI and the Responsible AI Network allow us to put into action a way of using AI that is aligned with Australia's AI ethics principles.
SSII2021 [SS2] Deepfake Generation and Detection – An Overview (ディープフェイクの生成と検出)SSII
June 10 (Thu), 14:30-15:00
Speaker: Huy H. Nguyen (The Graduate University for Advanced Studies, SOKENDAI / National Institute of Informatics)
Abstract: Advances in machine learning and their intersection with computer graphics allow us to easily generate high-quality images and videos. State-of-the-art manipulation methods enable the real-time manipulation of videos obtained from social networks. It is also possible to generate videos from a single portrait image. By combining these methods with speech synthesis, attackers can create a realistic video of a person saying something that they never said and distribute it on the Internet. This results in a loss of social trust, causes confusion, and harms people's reputations. Several countermeasures have been proposed to tackle this problem, from hand-crafted features to convolutional neural networks. Some countermeasures use images as input, while others leverage temporal information in videos. Their output can be binary (bona fide or fake), multi-class (deepfake detection), or segmentation masks (manipulation localization). Since deepfake methods evolve rapidly, dealing with unseen ones is still a challenging problem. Some solutions have been proposed; however, the problem is not completely solved. In this talk, I will provide an overview of both deepfake generation and deepfake detection/localization. I will mainly focus on the image and video domains and also introduce some audiovisual-based methods on both sides. Some open discussions and future directions are also included.
Deepfake detection is a critical and evolving field aimed at identifying and mitigating the risks associated with manipulated multimedia content created using artificial intelligence (AI) techniques. Deepfakes involve the use of advanced machine learning algorithms, particularly generative models like Generative Adversarial Networks (GANs), to create highly convincing fake videos, audio recordings, or images that can deceive viewers into believing they are genuine.
One prevalent approach to deepfake detection involves leveraging advancements in computer vision and pattern recognition. Researchers and developers employ sophisticated algorithms to analyze various visual and auditory cues that may indicate the presence of deepfake manipulation. For instance, anomalies in facial expressions, inconsistent lighting and shadows, or unnatural lip sync in videos can be indicative of deepfake content. Additionally, deepfake detectors may examine metadata, such as inconsistencies in timestamps or editing artifacts, to identify alterations in the content's authenticity.
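The lighting-consistency cue mentioned above can be made concrete with a toy heuristic. This is a minimal sketch under invented assumptions (synthetic 8x8 "frames", a hand-picked threshold), not a production detector: it flags videos whose frame-to-frame mean brightness jumps implausibly.

```python
import numpy as np

def lighting_inconsistency_score(frames):
    """frames: array of shape (n_frames, H, W); returns the largest
    absolute change in mean brightness between consecutive frames."""
    means = frames.reshape(len(frames), -1).mean(axis=1)
    return float(np.max(np.abs(np.diff(means))))

rng = np.random.default_rng(0)
smooth = rng.normal(0.5, 0.01, size=(10, 8, 8))   # consistent lighting
glitchy = smooth.copy()
glitchy[5] += 0.3                                  # sudden lighting jump, as a crude splice artifact

print(lighting_inconsistency_score(smooth) < 0.1)   # True: looks consistent
print(lighting_inconsistency_score(glitchy) > 0.2)  # True: flagged as anomalous
```

Real detectors combine many such cues (lip sync, blink patterns, shadows) and learn thresholds from data rather than hard-coding them.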
Machine learning plays a central role in deepfake detection, with models being trained on diverse datasets that include both authentic and manipulated content. Supervised learning techniques involve training models on labeled datasets, enabling them to recognize patterns associated with deepfake manipulation. Researchers also explore unsupervised and semi-supervised learning methods, allowing detectors to identify anomalies without explicit labels for every training instance.
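The supervised setup described above can be sketched in a few lines. The two features ("lip-sync error" and "blink rate") and the cluster means are invented for illustration; a real system would extract features from video with a deep network.

```python
import numpy as np

# Synthetic labeled dataset: real = 0, fake = 1 (feature values are invented).
rng = np.random.default_rng(1)
real = rng.normal([0.1, 0.5], 0.05, size=(100, 2))
fake = rng.normal([0.6, 0.1], 0.05, size=(100, 2))
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted P(fake)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same pattern, labeled examples of both classes driving a discriminative model, underlies the far larger CNN-based detectors used in practice.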
As the field progresses, deepfake detectors are increasingly adopting advanced neural network architectures to enhance their accuracy. Ensembling multiple models, each specialized in detecting specific types of manipulations, is another strategy employed to improve overall detection performance. Furthermore, the integration of explainable AI techniques enables better understanding of the detection process and provides insights into the features contributing to the decision-making process of the models.
Despite these advancements, deepfake detection remains a challenging task due to the constant evolution of deepfake generation techniques. Adversarial training, where detectors are trained on data that includes adversarial examples, is one method to improve robustness against sophisticated manipulation attempts. Continuous research efforts are required to stay ahead of emerging deepfake technologies and to develop detectors capable of identifying novel manipulation methods.
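The data-augmentation side of adversarial training can be illustrated as follows. This sketch uses simple random perturbations as a stand-in; genuine adversarial examples would be crafted against the model's gradients.

```python
import numpy as np

rng = np.random.default_rng(2)

def augment_with_perturbations(X, y, epsilon=0.05, copies=3):
    """Extend (X, y) with `copies` perturbed variants of every sample,
    so a detector cannot rely on brittle, easily disguised features."""
    noisy = [X + rng.uniform(-epsilon, epsilon, size=X.shape) for _ in range(copies)]
    X_aug = np.vstack([X] + noisy)
    y_aug = np.concatenate([y] * (copies + 1))   # labels are unchanged by perturbation
    return X_aug, y_aug

X = rng.normal(size=(50, 4))
y = rng.integers(0, 2, size=50)
X_aug, y_aug = augment_with_perturbations(X, y)
print(X_aug.shape, y_aug.shape)  # (200, 4) (200,)
```

Training on the augmented set trades a little clean-data accuracy for robustness against manipulations the detector has never seen verbatim.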
In conclusion, deepfake detection is a multidimensional challenge that requires a combination of computer vision, machine learning, and data analysis techniques. Researchers and practitioners are actively developing and refining methods to detect manipulated content by examining visual and auditory cues, leveraging machine learning models, and staying vigilant against evolving deepfake technologies. As the threat landscape evolves, ongoing innovation in detection methods will remain essential.
The “deepfake” phenomenon — using machine learning to generate synthetic video, audio and text content — is an ominous example of how quickly new technologies can be diverted from their original purposes. Month by month, it is becoming easier and cheaper to create fakes that are increasingly difficult to distinguish from genuine artefacts.
DEEPFAKE DETECTION TECHNIQUES: A REVIEWvivatechijri
Noteworthy advancements in the field of deep learning have led to the rise of highly realistic AI-generated fake videos, commonly known as deepfakes. These are manipulated videos, generated by sophisticated AI, that yield fabricated video and audio that seem original. Although this technology has numerous beneficial applications, there are also significant concerns about its disadvantages, so there is a need for a system that can detect these AI-generated videos and mitigate their negative impact on society. Videos shared through social media are of low quality, which makes their detection difficult. Many researchers have analyzed deepfake detection based on machine learning, Support Vector Machines, and deep learning techniques such as Convolutional Neural Networks with or without LSTMs. This paper analyses various techniques used by several researchers to detect deepfake videos.
IEEE EED2021 AI use cases in Computer VisionSAMeh Zaghloul
AI Use Cases in Computer Vision
Introduction and overview of AI use cases in computer vision, to answer a basic question: "How do machines see?", covering neural networks, object detection and recognition, content-based image retrieval, object tracking, image restoration, scene reconstruction, computer vision tools, frameworks, pretrained models, and public train/test datasets.
With real-project examples of using computer vision for Egyptian hieroglyph alphabet recognition and face recognition/matching, in addition to a hands-on interactive session on object/image tagging/annotation of videos/images to prepare a model training dataset.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sep-2019-alliance-vitf-ucberkeley
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Shruti Agarwal, Ph.D. candidate at U.C. Berkeley, delivers the presentation "Creating, Weaponizing, and Detecting Deep Fakes" at the Embedded Vision Alliance's September 2019 Vision Industry and Technology Forum. Agarwal explains how to use computer vision to detect deepfakes.
A survey of deepfakes in terms of deep learning and multimedia forensicsIJECEIAES
Artificial intelligence techniques reach us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is called deepfakes. Deepfake technology is used to completely modify video (or image) content to display something that was not in it originally. Deepfake technology endangers society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake detection technology from the viewpoint of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, ii) how deep learning and forensic tools have contributed to the detection of deepfakes, and iii) how, in the future, incorporating both deep learning technology and forensic tools can increase the efficiency of deepfake detection.
Deepfakes refer to synthetic media created using advanced AI and ML techniques. What are its potential applications and implications for society at large?
We are discussing the 20 best applications of deep learning with Python that you must know. Let's discuss them one by one:
i. Restoring Color in B&W Photos and Videos
With deep learning, it is possible to restore color in black-and-white photos and videos, giving new life to such media. One such project, published in the ACM Digital Library, colorizes grayscale images by combining global priors and local image features, based on convolutional neural networks.
The deep learning network learns, from past experience, patterns that naturally occur within photos, such as blue skies, white and gray clouds, and the greens of grasses. Although it can sometimes make mistakes, it is efficient and accurate most of the time.
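The core idea, learning a mapping from grayscale intensity plus contextual features to color, can be shown with a deliberately tiny sketch. This is not the CNN approach from the project above: the "context" features, the linear model, and the synthetic pixel data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Synthetic training pixels: grayscale value, two stand-in context features,
# and a bias term.
gray = rng.uniform(0, 1, size=n)
context = rng.uniform(0, 1, size=(n, 2))
features = np.column_stack([gray, context, np.ones(n)])

# Ground-truth mixing matrix used only to synthesize RGB targets.
true_W = np.array([[1.0, 0.8, 0.6],
                   [0.2, 0.0, 0.3],
                   [0.0, 0.3, 0.1],
                   [0.05, 0.02, 0.0]])
rgb = features @ true_W + rng.normal(0, 0.01, size=(n, 3))

# "Training": fit the grayscale-to-RGB map by least squares.
W, *_ = np.linalg.lstsq(features, rgb, rcond=None)
pred = features @ W
mse = float(np.mean((pred - rgb) ** 2))
print(f"reconstruction MSE: {mse:.5f}")
```

A real colorizer replaces the linear map with a deep convolutional network and the hand-made context features with learned ones, but the train-a-mapping-from-examples structure is the same.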
Fontys - Demystify AI. What is possible with AI and what is not?BigDataExpo
There is an explosion of applications of neural nets and deep learning. What can they do and what can they not? What can this development mean for you?
What is Deepfake AI? How it works and How Dangerous Are They?janviverma11
The term combines "deep learning" and "fake" to describe both the technology and the misleading content it produces. Deepfakes can replace one person with another in existing content or generate entirely new content in which people seem to do or say things they never did. The most significant risk of deepfakes lies in their potential to spread false information that seems true.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round-table discussion of vector databases, unstructured data, AI, big data, real-time, robots, and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad, and Procure.FYI's Co-Founder
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working with unstructured data. Speakers will present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup and is sponsored by Zilliz, maintainers of Milvus.
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdfEnterprise Wired
In this guide, we'll explore the key considerations and features to look for when choosing a trusted analytics platform that meets your organization's needs and delivers actionable intelligence you can trust.
State of Artificial intelligence Report 2023kuntobimo2016
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
Global Situational Awareness of A.I. and where it's headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
4. What are Deep Fakes
• Deepfakes are a form of synthetic media that use artificial intelligence and machine learning algorithms to create fake images, videos, or audio recordings that appear to be real. They are created by manipulating or combining existing content to produce a realistic result.
• The term "deepfake" comes from the underlying technology: deep learning algorithms, which learn to solve problems from large sets of data and can be used to create fake content of real people.
5. History of Deep Fake
• 19th century: Photo manipulation was developed.
• 20th century: The technology steadily improved.
• 1990: Deepfake technology was developed by researchers at academic institutions.
• 2014: The first academic research paper on deepfake technology was published.
• 2017: Deepfakes gained widespread attention on Reddit.
• 2018: The term "deepfake" became widely used after a user named "Deepfakes" posted several videos featuring the faces of celebrities. In the same year, researchers and tech companies started developing tools to detect deepfake content.
• 2019: The technology advanced, with new techniques for manipulating not only faces but also voices and even entire bodies. In the same year, the US Congress held hearings on deepfakes.
• 2020: Deepfake technology continued to evolve.
• 2021: Deepfakes remained a concern, with several incidents of deepfake videos being used to spread disinformation.
• 2022: Many open-source tools were released that make deepfakes easy to create.
37. Point to be Noted
1. How Is a Deepfake Different From Photoshop or Face Swap?
Fake images show up all over the internet these days and are often harmless. You're probably familiar with the amusing effects of "face swapping" on Snapchat or other photo apps, where you can put someone else's face on your own and vice versa. Or maybe you participated in the "age yourself" trend and ran your face through an aging app that showed you what you might look like in your ripe old age.
38. Point to be Noted
2. So What's the Big Deal?
In today's society, the vast majority of people get their information about the world and form opinions based on content from the internet. Therefore, anyone with the capability to create deepfakes can release misinformation and influence the masses to behave in a way that advances the faker's personal agenda. Deepfake-based misinformation could produce damage on both a micro and a macro scale.
39. Deepfakes Matter Because
Believability
If we see and hear something with our own eyes and ears, we believe it to exist or to be true, even if it is unlikely. The brain's visual system can be targeted for misperception, in the same way optical illusions trick our brains.
40. Deepfakes Matter Because
Accessibility
The technology of today and tomorrow will allow all of us to create fakes that appear real, without a significant investment in training, data collection, hardware, or software.
However, this accessibility also means that deepfake technology can be used for both good and bad purposes.
41. How Deep Fake Works
• The main technology for creating deepfakes is deep learning, a machine learning method used to train deep neural networks (DNNs).
• DNNs consist of a large set of interconnected artificial neurons, referred to as units.
• Much like neurons in the brain, each unit on its own performs a rather simple computation, but all units together can perform complex nonlinear operations.
• In the case of deepfakes, this operation is a mapping from an image of one person to an image of another.
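To make the "simple unit" idea concrete, here is a tiny sketch of one artificial unit in Python. The weights, inputs, and sigmoid nonlinearity are illustrative choices, not taken from any particular deepfake model:

```python
import numpy as np

def unit(inputs, weights, bias):
    """One artificial unit: a weighted sum of its inputs passed
    through a simple nonlinearity (here, the logistic sigmoid)."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Three inputs feeding one unit, with illustrative weights.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.2])
out = unit(x, w, bias=0.1)
print(round(float(out), 3))  # → 0.401
```

Stacking thousands of such units in layers, each feeding the next, is what lets the full network express the complex nonlinear mapping described above.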
42. How Deep Fake Works
• Deepfakes are commonly created using a specific deep network architecture known as an autoencoder.
• Autoencoders are trained to recognize key characteristics of an input image and subsequently recreate it as their output. In this process, the network performs heavy data compression.
• Autoencoders consist of three subparts:
• an encoder (recognizing key features of an input face)
• a latent space (representing the face as a compressed version)
• a decoder (reconstructing the input image in full detail)
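The three subparts can be sketched structurally with NumPy. The weights below are random and untrained, and the sizes (10,000 pixels, 300 latent values) are illustrative assumptions, so this only shows the encoder → latent space → decoder wiring, not a working face model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a 100x100 grayscale face (10,000 pixels) squeezed
# into a 300-dimensional latent space.
n_pixels, n_latent = 100 * 100, 300

# Untrained, randomly initialised weights (structural sketch only).
W_enc = rng.normal(0, 0.01, (n_latent, n_pixels))   # encoder
W_dec = rng.normal(0, 0.01, (n_pixels, n_latent))   # decoder

image = rng.random(n_pixels)        # stand-in input "face"
latent = np.tanh(W_enc @ image)     # encoder: compress to 300 values
recon = W_dec @ latent              # decoder: expand back to 10,000

print(latent.shape, recon.shape)    # → (300,) (10000,)
```

Training would adjust `W_enc` and `W_dec` so that `recon` closely matches `image`; the narrow latent layer is what forces the compression described above.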
43. How Deep Fake Works
Autoencoder: a DNN architecture commonly used for generating deepfakes.
44. How Deep Fake Works
Encoder
• Much like an artist sketching a portrait, the encoder compresses an image from originally tens of thousands of pixels into a few hundred (typically around 300) measurements.
• These measurements can relate to particular facial characteristics, e.g., whether the eyes are open or closed, the head pose, the emotional expression, skin color, etc.
45. How Deep Fake Works
Latent space
• Represents different facial aspects of the person on which the network is trained.
• It is often compared to an information bottleneck: the compression forces the network to learn general facial characteristics rather than memorizing every input example of specific people.
• The encoding of an input image into the latent space can take as little as 0.1% of the memory needed to store the original input image.
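The roughly 0.1% figure can be sanity-checked with a quick back-of-the-envelope calculation. The image size and float width below are assumptions chosen for illustration:

```python
# A hypothetical 500x500 RGB image at 1 byte per channel, versus
# 300 latent measurements stored as 4-byte float32 values.
image_bytes = 500 * 500 * 3     # 750,000 bytes
latent_bytes = 300 * 4          # 1,200 bytes

ratio = latent_bytes / image_bytes
print(f"{ratio:.2%}")           # → 0.16%
```

Larger images or lower-precision latent values push the ratio toward the 0.1% mentioned on the slide; the point is that the latent code is orders of magnitude smaller than the image.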
46. How Deep Fake Works
Decoder
• Decompresses the information in the latent space to reconstruct an image as faithfully as possible.
• The performance of the whole autoencoder network is measured by how closely the input and generated (output) images resemble each other. This task is made difficult by the heavy data compression performed by the encoder.
47. How Deep Fake Works
The Deep Fake Trick
• Two separate autoencoders trained on two different people will be very different and cannot be integrated.
• The trick for creating deepfakes lies in sharing the encoder across the two networks so that they remain compatible.
• This way, an image of one person can be encoded into a compressed latent space representation, from which the decoder of the other person creates the fake.
48. How Deep Fake Works
The Deep Fake Trick
• Using the same encoder, and hence the same latent space representation, for images of two separate people is key to understanding deepfakes.
• If two autoencoders were trained separately, their latent spaces would not be aligned. Sharing the encoder results in an aligned latent space, so the autoencoders can then be used to map from one person to the other.
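A minimal sketch of the trick, again with random untrained NumPy weights and illustrative sizes: one shared encoder feeds two person-specific decoders, and the fake is made by decoding person A's latent representation with person B's decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_latent = 10_000, 300

# One shared encoder, two person-specific decoders. The weights
# are random and untrained; this illustrates only the wiring.
W_shared_enc = rng.normal(0, 0.01, (n_latent, n_pixels))
W_dec_A = rng.normal(0, 0.01, (n_pixels, n_latent))  # decoder for person A
W_dec_B = rng.normal(0, 0.01, (n_pixels, n_latent))  # decoder for person B

face_A = rng.random(n_pixels)   # stand-in image of person A

# The deepfake trick: encode person A's expression and pose with
# the shared encoder, then decode with person B's decoder.
latent = np.tanh(W_shared_enc @ face_A)
fake_B = W_dec_B @ latent       # "person B" wearing A's expression
print(fake_B.shape)             # → (10000,)
```

Because both decoders were trained against the same latent space, `W_dec_B` interprets A's latent code as valid facial measurements for person B, which is exactly the alignment the slide describes.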
49. How Deep Fake Works
A shared encoder is key to creating novel facial images of a target person that exhibit the same emotional expression, head posture, etc. as the original. This new image can then be composited back into the original footage to create a fake scene.
50. How Deep Fake Works
Additional Technology Required to Develop Deep Fakes
• Generative adversarial network (GAN) technology is used in the development of much deepfake content, pairing a generator algorithm against a discriminator algorithm.
• Convolutional neural networks (CNNs) analyze patterns in visual data and are used for facial recognition and movement tracking.
• Natural language processing (NLP) is used to create deepfake audio: NLP algorithms analyze the attributes of a target's speech and then generate original text using those attributes.
• High-performance computing provides the significant computing power that deepfakes require.
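To illustrate the generator/discriminator idea behind GANs, the sketch below evaluates the two adversarial losses once on toy 1-D data. The fixed discriminator weights and the data distributions are illustrative assumptions, and no training loop is shown:

```python
import numpy as np

rng = np.random.default_rng(2)

def discriminator(x, w, b):
    """Logistic score: estimated probability that a sample is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# Toy 1-D data: "real" samples cluster near 4; the "generator"
# produces fakes by shifting noise (all numbers illustrative).
real = rng.normal(4.0, 0.5, 100)
fake = rng.normal(0.0, 0.5, 100) + 1.0   # generator output

w, b = 1.0, -2.0                          # a fixed toy discriminator
d_real = discriminator(real, w, b)
d_fake = discriminator(fake, w, b)

# Discriminator loss: wants real labeled 1 and fake labeled 0.
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1 - d_fake))
# Generator loss: rewarded when the discriminator is fooled.
g_loss = -np.mean(np.log(d_fake))
print(d_loss > 0, g_loss > 0)  # → True True
```

In a real GAN, the two networks take turns minimizing these opposing losses, which is the adversarial game that pushes generated content toward realism.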
51. Applications of Deep Fake
Entertainment: Deepfakes can be used to create entertaining content, such as videos or images that place a person in a different time or setting. For example, deepfakes have been used to create mashups of celebrities in movies or TV shows that they never actually appeared in.
Advertising and Marketing: Deepfakes can be used to create compelling marketing content or advertisements. For example, a company could create a deepfake of a celebrity endorsing its product without actually having to pay the celebrity for the endorsement.
Education and Research: Deepfakes can be used to create realistic simulations for educational or research purposes, for example to simulate historical events or scientific phenomena.
Accessibility: Deepfakes can be used to improve accessibility for people with disabilities, for example to create sign language interpretation of spoken language, or more natural-sounding synthesized speech for people who use assistive technology.
55. Companies using Deep Fake
Cognitivescale: Uses deep learning to create virtual assistants and chatbots that can interact with customers in a more natural, human-like way.
Modulate: Uses deep learning to create synthetic voices for gaming and virtual reality applications; the technology can also be used to build more natural-sounding voice assistants.
Synthesia: Uses deep learning to create videos of people speaking different languages or with different accents, which can be used for more engaging language-learning content or to improve accessibility for people with hearing impairments.
Pinch of AI: Uses deep learning to create personalized recipes based on users' dietary preferences and restrictions; the technology can also analyze food images and recommend recipes based on the ingredients.
59. Dark Side of Deep Fake
Why are they a problem?
• While deepfakes do have a number of beneficial uses, including political satire, comedy, entertainment, and education, some of their associated dangers are severe, even existential, threats. Since their introduction, deepfake technology has been used extensively to spread misinformation, to inspire misunderstanding, fear, or disgust, and to create false narratives about people.
• The potential for disinformation is the biggest concern with deepfakes. They can be used to create fake news, propaganda, and even blackmail material, and to manipulate public opinion, influence elections, and damage reputations.
60. Dark Side of Deep Fake
Political Manipulation: Deepfakes can be used to manipulate political opinions or elections. For example, a deepfake video of a political candidate saying something controversial or offensive could be used to sway public opinion or damage their reputation.
Fraud: Deepfakes can be used for financial fraud or extortion. For example, a deepfake video of a CEO could be created to instruct an employee to transfer funds to a fraudulent account.
Blackmail: A deepfake video of a person could be created without their consent and used to extort money or damage their reputation.
Cyberbullying: A deepfake video or image of a person could be created to embarrass, humiliate, or harass them.
Disinformation: Deepfakes can be used to spread false information or propaganda. For example, a deepfake video could make it appear as though a public figure said or did something they did not, damaging their reputation or influencing public opinion.
61. Real Life Incident
Facebook founder Mark Zuckerberg was the victim of a deepfake that showed him boasting about how Facebook "owns" its users. The video was designed to show how people can use social media platforms such as Facebook to deceive the public.
U.S. President Joe Biden was the victim of numerous deepfakes in 2020 showing him in exaggerated states of cognitive decline, meant to influence the presidential election.
Presidents Barack Obama and Donald Trump have also been victims of deepfake videos, some made to spread disinformation and some as satire and entertainment.
During the Russian invasion of Ukraine in 2022, Ukrainian President Volodymyr Zelenskyy was portrayed telling his troops to surrender to the Russians.
In 2019, a deepfake video of the Speaker of the US House of Representatives, Nancy Pelosi, circulated on social media. The video was manipulated to make it appear as though Pelosi was slurring her words and drunk, and was used to attack her credibility and reputation.
62. What Can We Do?
• As of now, deepfakes aren't a huge problem, but they will likely increase in prevalence and quality over the next few years. That doesn't mean you can't trust any image or video, but you should begin to train yourself to be more aware of fakes, especially when a video asks you to send money or personal information, or makes outrageous claims that seem unusual for the person who appears to be making them.
• Interestingly, AI may be the answer to detecting deepfakes. Models can be trained to recognize fake images on dimensions that the human eye can't detect. Keep a watchful eye on the development of the deepfake phenomenon over the next couple of years and, as always, remain vigilant.
63. Detection of Deep Fake
• Unusual or awkward facial positioning.
• Unnatural facial or body movement.
• Unnatural coloring.
• Videos that look odd when zoomed in or magnified.
• Inconsistent audio.
• People that don't blink.
• Misspellings.
• Sentences that don't flow naturally.
• Suspicious source email addresses.
• Phrasing that doesn't match the supposed sender.
• Out-of-context messages that aren't relevant to any discussion, event, or issue.
64. Prevention of Deep Fake
Companies, organizations, and government agencies are developing technology to identify and block deepfakes. Some social media companies use blockchain technology to verify the source of videos and images before allowing them onto their platforms.
Deepfake protection software is available from the following companies:
• Adobe has a system that lets creators attach a signature to videos and photos with details about their creation.
• Microsoft has AI-powered deepfake detection software that analyzes videos and photos to provide a confidence score showing whether the media has been manipulated.
• Operation Minerva uses catalogs of previously discovered deepfakes to tell whether a new video is simply a modification of an existing fake that has already been given a digital fingerprint.
• Sensity offers a detection platform that uses deep learning to spot indications of deepfake media, much as antimalware tools look for virus and malware signatures. Users are alerted via email when they view a deepfake.
65. Are Deep Fakes Legal?
Deepfakes are generally legal, and there is little law enforcement can do about them, despite the serious threats they pose. Deepfakes are only illegal if they violate existing laws such as those against defamation or hate speech.
The lack of laws against deepfakes reflects how few people are aware of the new technology, its uses, and its dangers. As a result, victims get no protection under the law in most deepfake cases.
66. Deep Fakes Statistics
How Many People Know What a Deepfake Is?
Globally, 71% of respondents say that they do not know what a deepfake is; just under a third of global consumers say they are aware of deepfakes.
67. Deep Fakes Statistics
How Many People Think They Could Spot a Deepfake?
57% of global respondents think they could spot a deepfake, while 43% admit they would not be able to tell the difference between a real video and a deepfake.
68. Deep Fakes Statistics
What Do People Think About Deepfakes?
• People are most concerned that deepfakes "could make it hard for us to trust what we see online".
• The runner-up concern is that "deepfakes are dangerous" – 62% of global respondents agree.
• 58% also agree that deepfakes are a "growing concern".
69. Conclusion
• In conclusion, deepfakes have the potential to significantly impact society in both positive and negative ways. While they can be used for creative and entertainment purposes, malicious use can result in the spread of misinformation, defamation, and other forms of harm. The increasing accessibility and sophistication of deepfake technology have made deepfakes difficult to detect and prevent, but efforts are being made to develop effective solutions.
• To address these challenges, it is important for individuals, organizations, and governments to work together to raise awareness about the risks and potential harms associated with deepfakes, and to develop and implement effective detection and prevention strategies. By doing so, we can help ensure that the benefits of deepfake technology are realized while minimizing its risks and negative impacts.
Check out this Website
https://this-person-does-not-exist.com/en