This project has received funding from the European Union
DIGITAL-2021-TRUST-01. Contract number: 101083756
meddmo.eu
Disinformation
challenges, tools and techniques to deal or live with it
Nikos Sarris
Media analysis, verification and retrieval group (MeVer)
Center for Research and Technology Hellas (CERTH)
What happens is posted,
but has what’s posted really happened?
huge masses of information are produced
by huge numbers of sources and delivered to huge audiences.
Some is bound to be fake...
the challenge
How can we help people
distinguish truth from lies?
Disinformation: Information that is false and deliberately created to harm a person, social group, organization or country.
Misinformation: Information that is false, but not created with the intention of causing harm.
Malinformation: Information that is based on reality, used to inflict harm on a person, organization or country.
INFORMATION DISORDER: Toward an interdisciplinary framework for research and policy making
Claire Wardle, Hossein Derakhshan
Council of Europe report DGI(2017)09
A challenge… …as old as time
{disinformation} {misinformation}
high cost in both cases
much harder today
•the rapid growth of social media and the Internet has made it easier for false
information to spread quickly and widely
•people are often more likely to share or believe false information that confirms
their preexisting beliefs or opinions (confirmation bias)
•the rise of generative AI (e.g. deepfakes, ChatGPT) is making it increasingly
difficult to distinguish between real and fake
•political polarisation and the erosion of trust in traditional media sources has
created an environment in which disinformation can thrive
•disinformation campaigns are often carried out by governments, political
organisations, and other powerful groups with an agenda to manipulate public
opinion and undermine democratic processes
the role of social media
•Amplification: algorithms that reward users for sharing content can amplify false information, making it more visible and easier to spread
•Speed of dissemination: platforms allow false information to spread quickly and widely
•Anonymity: sources can spread false information anonymously, making it
harder to hold them accountable
•Viral Sensationalism: False information that is sensational or attention-
grabbing is more likely to go viral
•Filter bubbles: users are commonly exposed to information that aligns with
their existing beliefs and opinions, making it harder to challenge false
information (debatable)
comes in many different shapes
•Misleading headlines: attract clicks and generate traffic, even if the content of the article doesn't support the headline
•Hoaxes and conspiracies: spread of false information with the intent to deceive, often involving politically motivated or sensational claims
•Fabricated stories: content invented with the intention of misleading people
•Satire and parody: although created for humorous purposes, it can be misinterpreted as factual
•Political propaganda: content designed to promote a political agenda or discredit opponents
•Manipulated media: altered or doctored content that is more attractive, persuasive and tempting to share
getting harder as we speak
AI synthetic generation of text and visuals
(combined?) is pushing the challenge to the next level
Image synthetically generated by DALL·E 2
Most of the text in this presentation up to this slide was generated by ChatGPT
the impact can be very high
•Misleading the public: the spread of false or misleading information can
impact individuals' decision-making and beliefs.
•Undermining democracy: false or misleading information can be used to
manipulate elections and undermine trust in democratic institutions
•Exacerbating conflicts: disinformation can fuel social and political
tensions, leading to increased conflict and violence
•Damaging reputations: the distribution of various rumors and claims is
frequently used to harm the reputation of individuals and organizations
•Impairing public health: conspiracy theories and dangerous
misinformation, like non-scientifically based medical advice about
causes, symptoms, and treatments
How can we…
model
detect
deal with
… (dis)information
Modeling
(dis)information
How can we decompose information into measurable signals of credibility?
The CCC model
Contributor: What can we find about the source of information?
Content: Does the posted content look reliable?
Context: Does everything contextualise together?
Contributor: Who says?
1. Reputation: what do people think of this source?
2. History: what is the past activity of this source?
3. Presence: where does this source exist?
4. Influence: what happens because of this source?
5. Popularity: who follows this source?
Content: Sounds real?
1. Quality: what is the text/visual/audio style like?
2. Popularity: what is the social interaction with it?
3. Authenticity: has it been manipulated or synthetically generated?
4. History: can it be found in past publications?
5. Reputation: how is it referenced by others?
Context: Does it stick together?
1. Cross-check: are there any similar reports?
2. Diversity: are there multiple coherent reports?
3. Provenance: how has this travelled through time?
4. Influence: what happens because of this?
5. Proximity: do source locations relate to events?
The AMI model (Agent - Message - Interpreter)
INFORMATION DISORDER: Toward an interdisciplinary framework for research and policy making
Claire Wardle, Hossein Derakhshan
Council of Europe report DGI(2017)09
The DACTA model
Detecting, Assessing, Countering, Training & raising Awareness
The BLOC model
Nwala, A.C., Flammini, A. & Menczer, F. A language framework for modeling social media account behavior. EPJ Data Sci. 12, 33 (2023).
https://doi.org/10.1140/epjds/s13688-023-00410-9
A language framework for modelling social media account behaviours
consisting of symbols representing user actions and content
Action alphabet
T:: Post message
P:: Reply to friend
p:: Reply to non-friend
π:: Reply to own post
R:: Reshare friend’s post
r:: Reshare non-friend’s post
ρ:: Reshare own post
Content alphabet
t:: Text
H:: Hashtag
M:: Mention of friend
m:: Mention of non-friend
q:: Quote of other’s post
ϕ:: Quote of own post
E:: Media object
U:: link (URL)
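A minimal sketch of how an account's activity could be encoded with the two alphabets above, assuming a simplified event log; the published BLOC framework goes further (e.g. analysing the resulting strings as time-segmented words), so this only illustrates the encoding step.

```python
# Minimal sketch of BLOC-style encoding. Assumption: a simplified event log of
# (action, [content types]) tuples; joining content words with "." is a
# readability choice here, not necessarily the framework's notation.

ACTION_SYMBOLS = {
    "post": "T",
    "reply_friend": "P",
    "reply_nonfriend": "p",
    "reply_self": "π",
    "reshare_friend": "R",
    "reshare_nonfriend": "r",
    "reshare_self": "ρ",
}

CONTENT_SYMBOLS = {
    "text": "t",
    "hashtag": "H",
    "mention_friend": "M",
    "mention_nonfriend": "m",
    "quote_other": "q",
    "quote_self": "ϕ",
    "media": "E",
    "url": "U",
}

def encode_account(events):
    """Turn a list of (action, [content types]) events into action and content strings."""
    action_str = "".join(ACTION_SYMBOLS[action] for action, _ in events)
    content_str = ".".join(
        "".join(CONTENT_SYMBOLS[c] for c in contents) for _, contents in events
    )
    return action_str, content_str

# Example: two posts, a reply to a non-friend, then a reshare of a non-friend's post
events = [
    ("post", ["text", "hashtag", "url"]),
    ("post", ["text", "media"]),
    ("reply_nonfriend", ["text", "mention_nonfriend"]),
    ("reshare_nonfriend", ["text", "url"]),
]
print(encode_account(events))  # ('TTpr', 'tHU.tE.tm.tU')
```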
Modeling
(dis)information
Various models have been proposed
The signals of credibility are many
The more signals are addressed, the better the assessment
A holistic view appears to be more constructive than examining any single signal in isolation
Detecting
(dis)information
Some tools that can help measure signals of credibility
Assessment of trustworthiness
Rule-based assessment of the credibility signals following the CCC model
• Assessment of each credibility signal
• Overall assessment of each pillar of the model
• Overall assessment of post credibility
• Further information provided for each pillar of the model
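As an illustration of how a rule-based aggregation over the CCC pillars could work, the sketch below combines pre-computed signal scores into pillar scores and an overall post score. The signal names follow the CCC model above; the [0, 1] scoring and the weights are illustrative assumptions, not the rules of the actual tool.

```python
# Illustrative rule-based aggregation over the CCC pillars.
# Assumption: each credibility signal has already been scored in [0, 1];
# the weights below are made up for demonstration and are not the tool's rules.

CCC_WEIGHTS = {
    "contributor": {"reputation": 0.25, "history": 0.20, "presence": 0.15,
                    "influence": 0.20, "popularity": 0.20},
    "content":     {"quality": 0.20, "popularity": 0.15, "authenticity": 0.30,
                    "history": 0.15, "reputation": 0.20},
    "context":     {"cross_check": 0.25, "diversity": 0.20, "provenance": 0.20,
                    "influence": 0.15, "proximity": 0.20},
}

def pillar_score(signals, weights):
    """Weighted sum of the signal scores of one pillar."""
    return sum(weights[name] * signals[name] for name in weights)

def assess_post(signals_per_pillar, pillar_weights=(0.3, 0.4, 0.3)):
    """Return per-pillar scores and an overall credibility estimate."""
    pillars = ("contributor", "content", "context")
    scores = {p: pillar_score(signals_per_pillar[p], CCC_WEIGHTS[p]) for p in pillars}
    overall = sum(w * scores[p] for w, p in zip(pillar_weights, pillars))
    return scores, overall

# Example: signal scores produced by upstream detectors (hypothetical values)
signals = {
    "contributor": {"reputation": 0.8, "history": 0.7, "presence": 0.9,
                    "influence": 0.4, "popularity": 0.6},
    "content":     {"quality": 0.5, "popularity": 0.7, "authenticity": 0.9,
                    "history": 0.6, "reputation": 0.5},
    "context":     {"cross_check": 0.8, "diversity": 0.6, "provenance": 0.7,
                    "influence": 0.5, "proximity": 0.4},
}
per_pillar, overall = assess_post(signals)
```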
Evolution towards transparency
Model evolved to more intuitive pillars:
Activity - Network - Influence
Automatic assessment was removed
A lot more data was added to aid users in making their own assessments
Image Verification Assistant
Quickly analyse images to identify possible forgeries
• 15 image analysis algorithms help uncover possible forgeries
• Visual heatmaps help users quickly understand the results
• Heatmap overlay on the image helps users spot manipulated areas
• Manual annotation space helps users discuss findings
Deepfake detection
Assess the possibility of deepfake manipulation in image or video
• Top-level analysis for the entire video
• Temporal video segmentation illustrating the manipulation probability of every segment
• Detailed analysis per video segment
• Image/video player window for detailed viewing
Location estimation
Estimate the most likely location for an image based on visual cues
• Map illustrating the most likely location for the image
• Example images from the most likely location illustrating visual similarity with the query image
• Window for detailed image viewing
Multimedia Archive
A platform where you can easily analyse, find and annotate media
• Objects and actions are automatically identified and added as tags
• Automatic image analysis creates short descriptions
• Assets are automatically classified into specific categories, such as disturbing, NSFW or memes
• Any tag can be manually added by users
Search by visual similarity
Find images that are visually and semantically similar
• Selecting to search for similar images
• Window for selecting the part of the image that the search should be based on
• Results include images with flags of visual similarity to the query window
Search by visual similarity
Find images that are visually nearly identical
• Selecting to search for near-duplicate images
• Results include images that are almost (visually) identical to the query image
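One common way to implement this kind of near-duplicate retrieval is with compact visual fingerprints. The sketch below uses a perceptual hash via the Pillow and imagehash packages (an assumption for illustration; the actual service may rely on learned visual descriptors instead).

```python
# Near-duplicate image search sketch using perceptual hashing.
# Assumption: the Pillow and imagehash packages are installed; a production
# service may rely on learned embeddings rather than a simple hash.
from pathlib import Path

from PIL import Image
import imagehash

def index_images(folder):
    """Compute a perceptual hash for every JPEG image in a folder."""
    return {path.name: imagehash.phash(Image.open(path))
            for path in Path(folder).glob("*.jpg")}

def find_near_duplicates(query_path, index, max_distance=8):
    """Return image names whose hash is within max_distance bits of the query, closest first."""
    query_hash = imagehash.phash(Image.open(query_path))
    matches = [name for name, h in index.items() if query_hash - h <= max_distance]
    return sorted(matches, key=lambda name: query_hash - index[name])

# Example usage (hypothetical paths)
index = index_images("archive/")
print(find_near_duplicates("query.jpg", index))
```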
Search by visual similarity (video)
Find videos that contain common scenes
• Selecting to search for near-duplicate video
• Results include video that contains scenes visually identical to scenes from the query video
• Frames from the query video
• Frames from the retrieved video
Fact-check a claim automatically
Automatic fact-checking by ClaimBuster
• Checks Google and Wolfram knowledge bases to return relevant results
• Returns relevant fact-checks using the Google Fact Check Explorer
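For the fact-check retrieval step, published fact-checks can also be queried programmatically through the Google Fact Check Tools API, the counterpart of the Fact Check Explorer mentioned above. The sketch below is a minimal example, not ClaimBuster's internal implementation; an API key is required, and the exact parameters should be checked against the API documentation.

```python
# Sketch of retrieving published fact-checks for a claim via the Google Fact Check
# Tools API (claims:search). Assumption: a valid API key; the field names follow
# the ClaimReview schema exposed by the API.
import requests

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(claim_text, api_key, language="en"):
    """Return publisher, rating and URL of fact-checks matching the claim text."""
    response = requests.get(
        API_URL,
        params={"query": claim_text, "languageCode": language, "key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    results = []
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            results.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),
                "url": review.get("url"),
            })
    return results
```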
Fact-check a live claim automatically
Live, automated fact-checking during political events by Squash
• Converts audio from an event into text and then searches for matches among previously published fact-checks in the ClaimReview database
• Relevant matches are chosen by human editors and displayed on users’ screens within seconds of the politician making the claim
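The matching step of such a pipeline (transcribed sentences looked up against previously published fact-checks) can be approximated with off-the-shelf sentence embeddings. Below is a rough sketch; the sentence-transformers model and the dummy ClaimReview-style entries are stand-ins, not Squash's actual components, and the speech-to-text step is omitted.

```python
# Sketch of matching a transcribed sentence against published fact-checks.
# Assumptions: sentence-transformers is installed and fact_checks holds entries
# loaded from a ClaimReview-style archive (dummy values used here).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

fact_checks = [
    {"claim": "Unemployment is at a 50-year low.", "verdict": "Mostly true",
     "url": "https://example.org/factcheck/1"},
    {"claim": "Crime has doubled in the last year.", "verdict": "False",
     "url": "https://example.org/factcheck/2"},
]
fact_check_embeddings = model.encode([fc["claim"] for fc in fact_checks])

def match_live_sentence(sentence, threshold=0.6, top_k=3):
    """Return candidate fact-checks for one transcribed sentence (for human review)."""
    query = model.encode(sentence)
    scores = util.cos_sim(query, fact_check_embeddings)[0]
    ranked = sorted(zip(scores.tolist(), fact_checks), key=lambda x: -x[0])
    return [fc for score, fc in ranked[:top_k] if score >= threshold]

print(match_live_sentence("Joblessness has never been lower in half a century"))
```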
Spot check-worthy posts
Automatic notification of check-worthy posts by ClaimHunter
• Monitors Twitter accounts and alerts when a new tweet is classified as check-worthy
Beltrán, Javier et al. “ClaimHunter: An Unattended Tool for Automated Claim Detection on Twitter.” KnOD@WWW (2021).
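A toy sketch of such an alert loop is shown below: fetch new posts from monitored accounts and notify fact-checkers when a post looks check-worthy. Both the fetching function and the keyword heuristic are placeholders; ClaimHunter itself relies on a trained claim-detection model, per the cited paper.

```python
# Toy sketch of a check-worthiness alert loop (not the actual ClaimHunter code).
# Assumptions: fetch_new_posts() would call a platform API; is_check_worthy()
# stands in for a trained claim-detection classifier.
import time

def fetch_new_posts(monitored_accounts):
    """Placeholder: return posts published by the monitored accounts since the last call."""
    return []

def is_check_worthy(text):
    """Placeholder heuristic standing in for a trained claim-detection classifier."""
    factual_cues = ("percent", "million", "billion", "increase", "decrease", "according to")
    return any(cue in text.lower() for cue in factual_cues)

def monitor(monitored_accounts, notify, interval_seconds=60):
    """Poll the accounts and alert fact-checkers whenever a post is flagged as check-worthy."""
    while True:
        for post in fetch_new_posts(monitored_accounts):
            if is_check_worthy(post["text"]):
                notify(post)  # e.g. send the post to a review queue
        time.sleep(interval_seconds)
```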
Collaborative verification
• Provides ways to easily discover
content from various sources
• Organises information in real-time
shareable views
• Provides access to internal and
external archives
• Allows use of many 3rd party tools
• Easily exports findings to share or
publish stories
• Used in EDMO and hubs to easily
share their findings
With fact-check retrieval features
Detecting
(dis)information
Various tools are available, but none is a silver bullet
Many more can be built, but there will still be no silver bullet
Combining the findings from many tools is important
Collaboration can help, but is hard to achieve
Dealing with
(dis)information
Can we eradicate all information that is false?
EU action plan
•Increase transparency of online political advertising
•Improve resilience of citizens against disinformation
•Enforce accountability of online platforms and other actors for
the spread of disinformation
•Enhance cooperation between EU Member States, online
platforms, and other stakeholders
•Protect the confidentiality of journalists' sources and the
freedom of the press
https://digital-strategy.ec.europa.eu/en/library/action-plan-against-disinformation
Collaboration - EDMO and 14 hubs
Fact checking - 97 initiatives
Fact checking - IFCN signatories
Fact checking - EFCSN recently launched
Why don’t we just ask ChatGPT?
Matthew R. DeVerna†, Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer, Artificial intelligence is ineffective and potentially harmful for fact checking
Observatory on Social Media, Indiana University, https://doi.org/10.48550/arXiv.2308.10800
• ChatGPT fact-checking does not significantly affect participants' ability to discern headline accuracy or share accurate news
• It decreases belief in true headlines that it mislabels as false and increases belief in false headlines that it is unsure about
Why not let it self-regulate?
Johnson T, Kromka SM. Psychological, Communicative, and Relationship Characteristics That Relate to Social Media Users' Willingness to Denounce
Fake News. Cyberpsychol Behav Soc Netw. 2023 Jul;26(7):563-571. doi: 10.1089/cyber.2022.0204. Epub 2023 May 30. PMID: 37253156.
68% of social media users believe people should respond with a
correction when they witness the sharing of misinformation
73% of users elect to ignore misinformation posted online
self-esteem, argumentativeness, conflict style, and interpersonal
relationships relate to users’ willingness to denounce (or ignore)
disinformation.
avoiding arguments on social media is easier than confrontation, and this
avoidance may take precedence if confrontation does not bring social capital benefits.
users who are media literate and trained in argumentation can make a
difference - but are there enough of such users?
Can platforms ban disinformation?
Jahn, Laura & Kræmmer Rendsvig, Rasmus & Flammini, Alessandro & Menczer, Filippo & Hendricks, Vincent. (2023). Friction Interventions to Curb the
Spread of Misinformation on Social Media, https://arxiv.org/abs/2307.11498
Do they want to?
✓ information (esp. false) is driving traffic (i.e. profit) into platforms
Can they identify all false information with certainty?
✓ impossible as in most cases subjective views are involved (see next slide)
Do we want them to?
✓ debatable as we also need to preserve freedom of speech (see next slide)
Can they somehow help?
✓ Yes, by introducing transparency in their way of operation
✓ Yes, by introducing at least some fact-checking on clear cases
✓ Yes, by introducing friction, i.e. making it less easy for users to
propagate questionable information*
Should we ban d/misinformation?
Would it be ideal for society to ban all false information?
✓ This would mean allowing only the truth
How do we define truth?
✓ The correspondence theory: A belief is true if it corresponds to the
way things actually are – to the facts
✓ The coherence theory: A belief is true if it is part of a coherent
system of beliefs
✓ Pragmatist theories: Truth is what is satisfactory to believe; true beliefs
are those that remain settled at the end of prolonged inquiry.
Can we handle the truth?
✓ Imagine a society where everyone knows the truth about everything
Dealing with
(dis)information
All available tools should be used and more should be built
Fact-checking should be intensified in a coordinated way
Media literacy should be organised centrally
Transparency should be enforced on the platforms
Laws and regulations should hold platforms, individuals and
organisations accountable
Strong coordination and collaboration is necessary
Thank you for your attention
Nikos Sarris
Senior Researcher
Media analysis, verification and retrieval team
Information Technologies Institute
Center for Research and Technology Hellas
mever.iti.gr - www.iti.gr - www.certh.gr
Advisor on media technologies
Athens Technology Centre
http://ilab.atc.gr
@nikossarris - https://www.linkedin.com/in/nsarris/
CONTACT us
This project has received funding from the European Union
DIGITAL-2021-TRUST-01. Contract number: 101083756
meddmo.eu
Fact-Check by MedDMO
@MEDDMOhub
info@meddmo.eu