Identifying AI-Generated Misinformation &
Deepfakes
Martin Mawut
Fact-Checker & Social Media Manager,
The ClarityDesk
4th February 2026
Introduction
● A computer science & IT graduate from the University of
Juba.
● Working as a fact-checker & social media manager for
The ClarityDesk.
● Completed an advanced cybersecurity internship with Al-Nafi,
2025.
● Worked as an IT assistant intern at Alpha Bank South Sudan,
2024–2025.
● Completed a course on Digital Security and Safe Online
Practices for Human Rights Defenders & Fact-Checking with
Code for Africa, 2025.
● Completed a cybersecurity smart fellowship with SafetyComm
South Sudan, 2024.
Martin Mawut
Fact-Checker
The ClarityDesk
+211928450245
martin@claritydesk.org
Contents
● AI Concepts.
● Types of AI (Machine Learning,
Deep Learning, Generative AI)
● How AI works in content
creation.
● Roles of AI in spreading
mis/disinformation.
Session Objectives
● Understand AI misinformation & deepfakes – what they are,
how they’re made, and why they’re spreading.
● Identify fake vs real content – spot red flags in AI-generated
text, images, audio, and video.
● Recognize risks – understand social, political, and security
impacts of AI misinformation.
● Verify digital content – use fact-checking, reverse searches,
metadata, and trusted tools.
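The metadata step can be sketched in code. The following is a minimal illustration only, assuming metadata has already been extracted into a plain dictionary with a real tool; the `suspect` dictionary and the list of generator hints are made up for the example, and clean metadata is never proof of authenticity, since metadata is easy to strip or forge:

```python
# Some AI generators leave traces in the "Software" field, while genuine
# camera photos usually carry a make/model and an original capture time.
GENERATOR_HINTS = ("midjourney", "dall-e", "stable diffusion", "firefly")

def metadata_red_flags(exif):
    """Return a list of red flags found in an image's metadata dict.
    An empty list does NOT prove authenticity: metadata can be
    stripped or forged, so treat this as one signal among many."""
    flags = []
    software = exif.get("Software", "").lower()
    if any(hint in software for hint in GENERATOR_HINTS):
        flags.append("generator named in Software field: " + exif["Software"])
    if "Make" not in exif and "Model" not in exif:
        flags.append("no camera make/model recorded")
    if "DateTimeOriginal" not in exif:
        flags.append("no original capture timestamp")
    return flags

# Hypothetical metadata from a suspect image.
suspect = {"Software": "Stable Diffusion v1.5"}
print(metadata_red_flags(suspect))  # three red flags
```

Pair this with the other checks: a verdict should never rest on metadata alone.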
AI Concepts
● Artificial Intelligence (AI) is technology that allows machines to learn, think,
solve problems, understand language, recognize images, and make
decisions.
● This presentation explores how AI plays a role in spreading misinformation,
its impacts, and possible solutions.
AI doesn’t refer to one thing
● Large Language Models (LLMs), e.g., ChatGPT, DeepSeek, and Google Gemini,
trained on massive text datasets.
● Generative Adversarial Networks (GANs): the technology powering much image generation.
● Machine Learning: training machines to learn patterns from data and imitate human tasks.
● Deepfakes: fake images, videos, or audio recordings created using AI.
Deep – deep learning (the AI technology); Fake – false or manipulated content.
Random Face Generator (link)
Machine Learning
AI Chatbots (link)
Types of Deepfakes
● Face swap: replacing one person’s face with someone else’s.
● Voice cloning: using samples of a subject’s voice to build a synthetic voice.
● Puppetry: making a subject appear to act, speak, or decide according to your
instructions.
● Text generation: using AI to generate textual content.
● Lip sync: modifying audio and video so that the subject appears to say things in
their own voice, with the right tone.
● Image generation/synthesis: producing an image of an object or a person
that does not exist.
Deepfake Harms
Impersonation and dis/misinformation:
● Publishing hoaxes & misinformation.
● Falsifying the identity of a celebrity or senior
government official.
● Impersonating customers & users in
order to commit fraud.
● Creating false online identities & profiles.
Fact-checking AI-generated content
There are good tools, but first analyse the content yourself.
Examples: AI-generated images of Michelle Obama and of Donald Trump in prison.
Debunking AI Content
● Critical analysis: zoom into the image and look for typical AI mistakes, e.g.,
mismatched earrings; deformed fingers, legs, or backgrounds.
● Reverse image search: use fact-checking tools to find the original source and gather evidence.
● Use AI-detection tools such as IsitAIornot and Hugging Face to determine whether AI has
been involved.
● Watch out for the aesthetic sheen, e.g., unrealistically smooth skin (no
pores, scars, or wrinkles).
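Reverse image search engines work by comparing perceptual fingerprints that stay stable when an image is re-uploaded, resized, or lightly filtered. A minimal sketch of the underlying idea in pure Python, using tiny made-up 4×4 grayscale grids in place of real downscaled photos:

```python
def average_hash(pixels):
    """Perceptual 'average hash': each pixel becomes 1 if it is
    brighter than the image mean, else 0. Visually similar images
    produce similar bit strings."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits: a small distance means the two
    images are likely copies of one another."""
    return sum(x != y for x, y in zip(a, b))

# Tiny grayscale images (values 0-255) standing in for downscaled photos.
original = [[200, 190, 30, 25],
            [210, 180, 20, 35],
            [ 40,  50, 60, 70],
            [ 45,  55, 65, 75]]
# The same image, uniformly brightened (e.g., re-posted with a filter).
brightened = [[p + 20 for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(brightened)
print(hamming(h1, h2))  # 0 – a uniform brightness shift leaves the hash unchanged
```

Real engines hash millions of indexed images this way, which is why a reverse search can surface the original photo even after a fake has been cropped or recompressed.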
Tools to Use.
Isitaiornot
● Upload the image onto the
website and analyse it.
● The tool will then analyse it and
tell you whether it is AI-generated or
human-made.
link
link
Tools to Use.
● Upload the image to the
website.
● The tool will then analyse it and
tell you whether it is AI-generated or
human-made.
● No sign-up required
AiImageChecker
Tools to Use.
● Upload the image to the website.
● The tool will then analyse it and tell you whether it is
AI-generated or human-made.
● No sign-up required.
Hive
link
Is it AI-generated or not?
Summary
● Do your own analysis; don’t rely only on tools.
● Apply critical thinking: is the content consistent with what you already know about the subject?
● Then, use tools to confirm.
AI Ethics Principles
1. Accuracy and Reliability: outputs
must not be treated as final truths. Always
cross-check results with trusted human
sources.
2. Transparency: be clear when AI tools
are used. Disclose which AI tools or
methods were used to verify content.
3. Accountability: humans, not AI, must
be responsible for final decisions.
4. Bias Awareness and Fairness:
regularly evaluate AI tools for bias and
unfair outcomes. AI may unfairly flag true
content as false due to limited training data.
5. Privacy and Data Protection:
respect data protection laws and ethical
data-handling standards. Verification
should not violate human rights.
Q&A and Closing Remarks
Thank you for your time!
Comments, Observations, Objections
Connect with me.
https://medium.com/@mawutmartin
www.linkedin.com/in/martinmawut
Identifying AI-Generated Misinformation & Deepfakes
Martin Mawut
Fact-Checker | The ClarityDesk
martin@claritydesk.org | +211910706303
