
Monitoring, Understanding, and
Influencing the Co-Spread of
COVID-19 Misinformation and
Fact-checks
GRÉGOIRE BUREL, HARITH ALANI
Knowledge Media Institute, The Open University, UK
Topics we will cover.
- How can we track and monitor
online misinformation?
- How can we visualise the
evolution of misinformation?
- How can we track the
effectiveness of fact-checks?
Monitoring
1
- How do demographics spread
misinformation?
- How do different types of
misinformation spread
differently?
- How hard is it to annotate
COVID-19 information reliably?
Understanding
2
- Can we push reliable information
to individuals at scale?
- Who may be receptive to fact-
checked information?
Influencing
3
Technology we will
use.
The Fact-checking Observatory
► Does fact-checking impact misinformation spread?
► Can we affect spreading behaviour?
► How to present the evolution of spread over time automatically?
Misinformation and Fact-Checking
during COVID-19
We monitor and analyse the propagation of misinformation and fact-checks on social
media and investigate methods for influencing spreading behaviour.
More than 16k fact-checks were
published during COVID-19
Misinformation still spreads
twice as much…*
*For fact-checked content.
Influencing
3
We push fact-checks to
misinformation spreaders and
check who might be influenced.
- What types of messages are
the most successful?
- What are the reactions of
misinformation spreaders?
Identify misinformation
spreaders and push fact-
checks.
Understanding
2
We analyse if fact-checks impact
the spread of misinformation.
- Who spreads
misinformation/fact-checks?
- What topics are resistant to
fact-checking?
- Does fact-checking impact
misinformation spread?
A large-scale study on
the effectiveness of
fact-checking across
topics, demographics
and time.
Monitoring
1
We track the spread of COVID-
19* misinformation and fact-
checks on Twitter using claim
reviews and create weekly
reports about the reach of
misinforming posts and their
corresponding fact-checks on
Twitter.
*We also track Russo-Ukrainian war misinformation and collect
general fact-checks.
A continuously updated
database of
misinformation and fact-
checked URL mentions
on Twitter and weekly
spread reports.
Fact-checking
Observatory
The Misinformation and Fact-checks
Database / The Fact-checking Observatory
Monitoring and Visualising
1
The Misinformation and Fact-checks mentions
database.
Monitoring and Visualising
1
In order to track the spread of COVID-19*
misinformation and fact-checks on Twitter
we need to:
1. Identify and collect URL pairs that
connect misinforming content to fact-checks.
2. Track their mentions on social media.
We use the Claim Reviews published by
IFCN signatories found in the
CoronaVirusFacts Alliance database.
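The collection step above can be sketched in a few lines of Python. This is a minimal illustration, not the Observatory's actual code: the sample record and URLs are invented, but the field names (`url`, `itemReviewed`, `appearance`) follow the schema.org ClaimReview markup that fact-checkers publish.

```python
import json

def extract_url_pairs(claim_review: dict):
    """Extract (misinforming URL, fact-check URL) pairs from one
    schema.org ClaimReview record: `itemReviewed.appearance` lists
    where the claim appeared; `url` is the fact-check article."""
    fact_check_url = claim_review.get("url")
    item = claim_review.get("itemReviewed") or {}
    appearances = item.get("appearance", [])
    if isinstance(appearances, dict):  # a single appearance object
        appearances = [appearances]
    pairs = []
    for appearance in appearances:
        misinfo_url = appearance.get("url")
        if misinfo_url and fact_check_url:
            pairs.append((misinfo_url, fact_check_url))
    return pairs

# Hypothetical record for illustration only.
record = json.loads("""{
  "@type": "ClaimReview",
  "url": "https://factchecker.example/covid-claim-review",
  "itemReviewed": {
    "@type": "Claim",
    "appearance": [{"url": "https://misinfo.example/fake-cure"}]
  },
  "reviewRating": {"alternateName": "False"}
}""")
print(extract_url_pairs(record))
```

Each extracted pair is what the tracker then searches for on social media.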
The Misinformation and Fact-checking mentions
database.
Monitoring and Visualising
1
Fact-checking
URLs Data
Collection
Twitter Data
Collection Tweets
Claim
Reviews
Demographics
Extraction
FCO
Database
Topics
Continuous
tracking
Misinfo/
FC URLs
16,520+ COVID-19 Fact-checks
494,100+ COVID-related Tweets
Weekly Misinformation and Fact-checking Reports.
Monitoring and Visualising
1
► What misinformation keeps spreading?
► What fact-check spreads the most?
Weekly automatically generated reports
on the spread of misinformation and fact-
checks that include:
1. Key content and topics.
2. Fact-checking coverage.
3. Demographic impact.
The Fact-checking
Observatory.
The weekly reports help identify
the evolution of key topics and key
misinforming content.
Monitoring and Visualising
1
The FCO homepage.
1
Monitoring and Visualising
1
Fact-checking Observatory
The timeline page.
2
Monitoring and Visualising
1
Fact-checking Observatory
The FCO reports.
3
Monitoring and Visualising
1
Fact-checking Observatory
Report page annotations (each report page includes a disclaimer, trends, and versioning):
- Comparison with the previous week's spread; new spread for the current week.
- Tree map showing overall topic importance; weekly topic spread over time; top content for the current week.
- Number of fact-checks per country; tree map showing fact-checks by language.
- Misinformation spread (current week, previous week, overall); fact-check spread; demographic spread.
Exploring the Fact-checking
Observatory.
1 Monitoring and Visualising
Explore the FCO.*
Can you identify trends? What
misinformation spreads the most?
*Poynter has stopped updating its COVID-19 misinformation database. As a result, the Fact-checking Observatory has stopped
generating new COVID-19 reports.
- Spread ratio / Fact-checking delay
- Topic spread
- Fact-checker reports
- Claim trend / Claim reports
- Top spreaders / First/last spread
- Most/least successful claims / Associated claims/fact-checks
- Demographic impact / Sharers’ locations
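Two of these metrics, the spread ratio and the fact-checking delay, are simple to compute once tweets are matched to misinforming and fact-checking URLs. A minimal sketch (the function name and the toy timestamps are ours, not the FCO's):

```python
from datetime import datetime, timedelta

def spread_metrics(misinfo_times, factcheck_times):
    """Two report metrics for one claim:
    - spread ratio: misinformation mentions per fact-check mention;
    - fact-checking delay: time from the first misinformation
      mention to the first fact-check mention."""
    ratio = len(misinfo_times) / max(len(factcheck_times), 1)
    delay = min(factcheck_times) - min(misinfo_times)
    return ratio, delay

# Illustrative timestamps, not real data.
m = [datetime(2020, 3, 1), datetime(2020, 3, 2), datetime(2020, 3, 9)]
f = [datetime(2020, 3, 4)]
ratio, delay = spread_metrics(m, f)
print(ratio, delay.days)  # 3.0 and a 3-day delay
```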
What’s next?
Monitoring and Visualising
1
Spread evolution
Ukraine/Russia
Reports
Demographics and topics impact on the co-
spread of COVID-19 misinformation and
fact-checks on Twitter
Understanding
2
Should all misinformation be fact-checked in the same way?
What is the relation between
misinformation and fact-check
spread?
1. Do misinformation and fact-checking
information spread similarly?
- Non-parametric MANOVA/ANOVA.
2. Does fact-checking spread affect the
diffusion of misinformation about COVID-19?
- Weak causation analysis.
- Impulse response analysis and FEVD.
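The weak-causation step can be illustrated with a lag-1 Granger-style F-test. The actual analysis uses VAR models with impulse responses and FEVD; the stdlib sketch below only shows the core idea of comparing a restricted against an unrestricted autoregression, on synthetic series we invent here:

```python
import math

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def _ols_rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    beta = _solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_f(x, y):
    """Lag-1 Granger F-statistic: does x's past improve the prediction
    of y beyond y's own past (restricted vs. unrestricted regression)?"""
    target = y[1:]
    restricted = [[1.0, y[t - 1]] for t in range(1, len(y))]
    unrestricted = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    rss_r, rss_u = _ols_rss(restricted, target), _ols_rss(unrestricted, target)
    n, k, q = len(target), 3, 1  # observations, unrestricted params, restrictions
    return ((rss_r - rss_u) / q) / (rss_u / (n - k))

# Synthetic series where y is driven by x's previous value.
x = [math.sin(0.7 * t) for t in range(60)]
y = [0.0] + [0.9 * x[t - 1] + 0.01 * ((t * 37) % 7 - 3) for t in range(1, 60)]
print(granger_f(x, y))  # large F: x "Granger-causes" y in this toy setup
```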
Who is most likely to spread misinformation/facts?
Do fact-checks reduce misinformation spread?
7,370 Misinforming URLs
9,151 Fact-checking URLs
358,776 Tweets
Analysis on data
collected until
4th January 2021
Understanding
2
Analysed periods: initial (0–3 days), early (4–10 days), late (10+ days).
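A minimal helper for bucketing mentions into these analysis periods (the function name is ours):

```python
def spread_period(days_since_first_mention: int) -> str:
    """Map the delay since a claim's first mention to an analysis
    period: initial (0-3 days), early (4-10 days), late (10+ days)."""
    if days_since_first_mention <= 3:
        return "initial"
    if days_since_first_mention <= 10:
        return "early"
    return "late"

print([spread_period(d) for d in (0, 3, 4, 10, 11)])
# → ['initial', 'initial', 'early', 'early', 'late']
```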
Understanding
2
Chart categories: vaccines, symptoms, spread, cures, conspiracy theories, causes, authorities, other (fact-check vs. misinformation URL shares).
Topical misinformation and fact-checks spread (log scale).
Chart categories: gender* (male, female), account type (non-org., org.), user age (≤18, 19–29, 30–39, ≥40).
Demographic misinformation and fact-checks spread
(log scale).
Topics and Demographics.
Fact-checkers’ annotations.
AI models make it possible to
identify demographics
automatically.
How do we identify topics?
Understanding
2
Annotating and Classifying
The Annotation Classes
- Origins → Where did COVID-19 emerge and how?
- Transmission → How does COVID-19 spread?
- Prevention and Cures → How to prevent or
cure COVID-19?
- Vaccine → COVID-19 vaccines.
- Conspiracy → COVID-19 conspiracies.
- Government and Authorities → How have
governments and authorities responded?
- People and Organisations → What have different
people or organisations said about COVID-19?
In order to better understand how misinformation
spreads, we need to differentiate
misinformation types/topics:
1. Define: What misinformation categories do we
identify?
2. Annotate/Classify: Humans annotate data in
order to create automatic classification models.
3. Analyse: Identify spreading patterns of specific
misinformation categories.
Train a machine learning model (Distil-BERT) for
classifying topics automatically.
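The study fine-tunes DistilBERT; as a self-contained stand-in that illustrates the annotate-then-classify step, here is a tiny naive Bayes topic classifier over word counts. The training examples are invented toy data, not the fact-checkers' annotations:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Train a naive Bayes topic classifier on (text, topic) pairs.
    A toy stand-in for the DistilBERT model used in the actual study."""
    word_counts = defaultdict(Counter)
    topic_counts = Counter()
    vocab = set()
    for text, topic in examples:
        tokens = text.lower().split()
        word_counts[topic].update(tokens)
        topic_counts[topic] += 1
        vocab.update(tokens)
    return word_counts, topic_counts, vocab

def classify(model, text):
    """Return the topic with the highest (Laplace-smoothed) log score."""
    word_counts, topic_counts, vocab = model
    total = sum(topic_counts.values())
    def log_prob(topic):
        denom = sum(word_counts[topic].values()) + len(vocab)
        score = math.log(topic_counts[topic] / total)
        for tok in text.lower().split():
            score += math.log((word_counts[topic][tok] + 1) / denom)
        return score
    return max(topic_counts, key=log_prob)

# Toy annotated examples, invented for illustration.
examples = [
    ("vaccine doses cause side effects", "vaccines"),
    ("new vaccine approved for use", "vaccines"),
    ("virus spreads through the air", "spread"),
    ("how fast the virus spreads indoors", "spread"),
]
model = train_nb(examples)
print(classify(model, "vaccine side effects reported"))  # → vaccines
```

The real pipeline swaps this for a fine-tuned transformer, but the annotate → train → classify loop is the same.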
Topics identification - Machines find it hard as well.
Understanding
2
Case 1 → World Health Organisation has “ruled out the effectiveness of any home remedy to combat COVID-19”.
Our classifier labelled the post as “cures” and fact-checkers as “causes”.
Case 2 → “i see more lies and bs coming from you alleged fact checkers than from mainstream news for example
this page absolute bs who are you funded by bill gates”. Our classifier labelled the post as “conspiracy”
and fact-checkers as “causes”.
Case 3 → “marco rubio says anthony fauci lied about masks fauci didnt the message on masks was primarily about
preserving a limited supply for health care workers who were at especially high risk of exposure”. Our classifier
labelled the example as “spread” and fact-checkers as “authorities”.
Case 4 → “president cyril ramaphosa did not receive his covid 19 vaccine with a capped needle”. Our classifier
labelled the post as “vaccine”, fact-checkers as “authorities”.
Case 5 → “although us is pushing now for a leak out of a wuhan lab the recent research also says that a virus
found in pangolins banned smuggled in wuhan markets is a very close match to covid 19 not clear but could be
mutated once in humans may not reoccur”. Labelled by our classifier as “spread”, and fact-checkers as
“causes”.
Case 6 → “a newspaper clipping claiming that the bihar health department has found #COVID-19 in poultry
chicken samples it tested is fake”. Labelled by our algorithm as “other”, whereas fact-checkers labelled it as
“causes”.
Understanding
2
Automatic Classification
- Humans struggle to annotate the COVID-19
topics.
- Topics are very close; therefore, machine
learning models also struggle to
differentiate them.
- E.g., Spread and Authorities, Causes
and Conspiracies.
For COVID-19 we have manual annotations from
fact-checkers, so we do not have to rely on
third-party annotations or an automatic
classification model.
Topics identification - Machines find it hard as well.
► Already fact-checked content re-spreading.
► Conspiracies and causes need to be addressed
differently than other topics.
► Fact-checkers republishing/reposting policy?
Stacked cumulative spread of misinforming and corrective information.
Global and topical spreading differences.
Initial onset period until mid-March; ramp-up period from mid-March until mid-September; late period from mid-September (Jan 2020 – Jan 2021). 2x more misinformation.
Understanding
2
Analysed periods: initial (0–3 days), early (4–10 days), late (10+ days). Global vs. topical spread per period (=, ≠, ≈/≠, =/≠) shows “converging” behaviour over time; causes and conspiracies still spread differently in the late phase.
Short-term demographics differences.
Individuals vs. Organisations / Gender*
Understanding
2
- Individuals spread more misinformation
than organisations.
- Most organisation-driven spread occurs in
the initial period.
- Individual spread of misinformation
continues over long periods.
Individuals’ exposure to fact-checked
content over long periods is key.
- Females spread less misinformation than
males (but represent 40% of the Twitter
userbase).
- Misinformation spread is independent of
gender.
- Same spreading behaviour over time (→ ∞).
Gender is not important when dealing with
misinformation spread.
AI models make it possible to
identify demographics
automatically.
Fact-checking fast
spread response
Inconclusive
misinformation
response trend
Self initial response
(spread drop soon
after initial increase)
- Bidirectional weak causation between
misinformation and fact-checks spread.
- Fact-checking spread not clearly impacting
misinformation spread (impulse response and
FEVD).
- Fact-checks are quick to respond to
misinformation spread.
Weak impact of fact-checking
spread on misinformation spread*.
Understanding
2
► Make fact-checking content more sharable?
► Keep spreading fact-checks?
How to increase the impact/spread of fact-
checking content?
*Globally for fact-checked content, but not for all topics.
Short-term success in reducing misinformation spread. Hard to affect irrational misinformation spread.
Virus Causes
Misinformation spread
increasingly dependent on
fact-checks spread
Fact-checking
spreading initially
independently
Fact-checking
initially affecting
misinformation.
Fast fact-
checking
response
Inconclusive
misinformation response
trend
Understanding
2
Conspiracy Theories
► Conspiracies and causes need to be addressed
differently than other topics.
Topic Co-Spread
► No need to target gender specifically.
► Targeting long individual exposure to
misinformation.
► Make fact-checking content more sharable?
► Keep spreading fact-checks?
Misinformation and fact-checking spread.
- Misinformation spreads more than fact-checks.
- Fact-checking is fast to spread initially in response to misinformation spread.
- Weak bi-directional relation between fact-checks and misinformation spread.
Demographic Co-Spread
Overall Misinformation and Fact-checking Spread
- Misinformation spreads independently from
gender.
- Individuals spread more misinformation over
long periods.
- Misinformation topics continue spreading over
long time periods .
- Fact-checking spread impact on individual
topics tend to be short-term.
Understanding
2
Automatically Pushing Fact-checks to
Misinformation Spreaders
Influencing
3
How can we reach those outside the choir?
- They don’t think they need fact-checking tools…
- They don’t know about such tools…
- They’re not tech-savvy enough to install and use them…
Approach (no installation needed, and corrections can be seen by
anyone):
1. Search for mentions of misinformation on Twitter.
2. Use templates to notify users about misinformation.
Can we influence misinformation
spreading behaviour by pushing fact-
check content using a bot?
We need tools to push fact-checks when and where
necessary.
What templates are the most likely to affect users
spreading misinformation on social media?
What types of users are more likely to respond
favourably to the bot?
Influencing
3
The bot and reactions database.
Influencing
3
Fact-checking
URLs Data
Collection
React.
Tweets
FC
Database
Reactions
Database
Continuous
updating
3,989 tweets targeting 2,922 distinct
users.
Bot posts
Populate
Template and
Respond
Reactions and
Responses
Collection
Metadata
Collection and
Demographics
Tweets
Target
Resp.
Tweet
Templates
Database
Random
Template
Selection
FC database 140,000+ Fact-checks.*
*Includes general fact-checks besides COVID-19 fact-checks.
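Steps 3–4 of the pipeline (random template selection and population) amount to picking a reply template and substituting the <VERDICT> and <FACT-CHECK-URL> placeholders. A sketch with two of the seven styles; the template wording is abridged and the names and URL are illustrative:

```python
import random

# Hypothetical in-code table; the real bot draws from a templates database.
TEMPLATES = {
    "factual": ("Please, note that the link you shared contains a claim "
                "that was fact-checked and appears to be <VERDICT>. "
                "Fact-check: <FACT-CHECK-URL>"),
    "friendly": ("Hi there! Please note that the link you shared contains "
                 "a claim that was fact-checked and appears to be "
                 "<VERDICT>. Fact-check <FACT-CHECK-URL>"),
}

def build_reply(verdict: str, fact_check_url: str, rng=random) -> str:
    """Pick a random reply template and fill in its placeholders."""
    style = rng.choice(sorted(TEMPLATES))
    reply = TEMPLATES[style]
    return (reply.replace("<VERDICT>", verdict)
                 .replace("<FACT-CHECK-URL>", fact_check_url))

reply = build_reply("False", "https://factchecker.example/review")
print(reply)
```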
@MisinfomeB
What type of language should we use?
Influencing
3
The 7 Reply Templates (style / amount):
- Factual (758): “Please, note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL> I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Alerting (732): “Oops… it seems something might be wrong! The link you shared contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Identity (734): “I’m a bot fighting misinformation spread. I noticed the link you shared contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. Plz follow me & DM any feedback.”
- Suggestive (750): “How about double-checking this? This link contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Empathetic (281): “I know, it's hard to distinguish fact from fiction 😩. The link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Alarming (32): “Misinformation can be really harmful! 😬 Please, note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Friendly (702): “Hi there! Please note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
Measuring positive, negative
and neutral reactions.
Twitter metadata (e.g., retweets, replies,
etc.) can be used to determine whether a user
is responding favourably to the bot
message.
Some bot reactions are ambiguous and
cannot be easily associated with a
positive or negative reaction.*
* Although replies can indicate either a positive or negative reaction, we
observed that ≈95% of the replies were actually negative reactions.
Therefore we consider a reply a negative reaction.
Influencing
3
Majority voting is used for deciding if a
message was perceived favourably or
negatively.
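The majority-voting step might look as follows. Only the reply → negative mapping is grounded in the observation above (≈95% of replies were negative); the other signal polarities are assumptions for illustration:

```python
from collections import Counter

# Assumed signal polarities (only "reply" → negative is grounded in the
# study's observation; the rest are illustrative assumptions).
SIGNAL_POLARITY = {
    "retweet": "positive", "like": "positive", "post_deleted": "positive",
    "block": "negative", "reply": "negative",
}

def classify_reaction(signals):
    """Majority vote over the metadata signals observed for one bot
    message; ties and empty signal sets stay 'unknown' (ambiguous)."""
    votes = Counter(SIGNAL_POLARITY[s] for s in signals if s in SIGNAL_POLARITY)
    if not votes:
        return "unknown"
    (top, top_n), *rest = votes.most_common()
    if rest and rest[0][1] == top_n:  # tie → ambiguous
        return "unknown"
    return top

print(classify_reaction(["like", "retweet", "reply"]))  # → positive
print(classify_reaction([]))                            # → unknown
```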
Influencing
3
Charts: bot reactions overall (negative / positive / unknown) and positive vs. negative reactions per template (alarming, alerting, empathetic, factual, friendly, identity, suggestive).
► No significant relationship between template
and the positive/negative/unknown reactions (p = 0.3415).
► No significant relationship between template
and the positive/negative reactions (p = 0.1916).
Most users ignore the bot, only a few reactions are
positive, and message style has no significant
impact on user reaction.
Do users react differently to
the bot messages and
templates?
1. How do users react to the bot in
general?
2. Are reactions impacted by the
language used by the bot?
- Fisher’s exact test of independence with simulated p-value.
- Post-hoc test (Bonferroni adjusted).
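Fisher's exact test can be computed directly from the hypergeometric distribution. The analyses above use larger tables with simulated p-values; this sketch covers only the 2×2 case (e.g. one template style vs. the rest, by positive/negative reaction counts):

```python
from math import comb

def fisher_exact_2x2(table):
    """Two-sided Fisher's exact test p-value for a 2x2 contingency
    table, summing all hypergeometric outcomes at least as extreme
    as the observed one."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p(k):  # probability of k in the top-left cell
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    observed = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= observed * (1 + 1e-9))

print(round(fisher_exact_2x2([[8, 2], [1, 5]]), 4))  # → 0.035
```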
What types of users are more
likely to respond favourably to
the bot?
- Rate Twitter Accounts → Analyse Twitter
user timelines in order to determine how
much misinformation is shared.
- Rate URLs and domains → Identify the
reliability of domains and/or URLs.
- Visualise and Understand → Visualise how
ratings were calculated.
MisinfoMe provides a score for each user
account that indicates whether they generally
tend to share misinformation or reputable
information.
We hypothesise that users who are less
polarised towards misinformation (i.e., do
not always share misinformation) may be
more likely to respond favourably to the
bot.
Approach:
1. We use the MisinfoMe tool for rating
each user account based on the type
of information they have shared in the
past.
2. We compare positive reactions to the
bot against the MisinfoMe credibility
rating.
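A toy version of such an account rating, mapping a user's share history to coarse credibility labels. The thresholds are invented for illustration; MisinfoMe's actual scoring differs:

```python
def credibility_label(n_misinforming: int, n_total_rated: int) -> str:
    """Map a user's share history to a coarse credibility label.
    Thresholds are invented; MisinfoMe computes its own score."""
    if n_total_rated == 0:
        return "not verifiable"
    score = 1.0 - n_misinforming / n_total_rated  # 1 = fully credible
    if score >= 0.9:
        return "credible"
    if score >= 0.7:
        return "mostly credible"
    if score >= 0.4:
        return "uncertain"
    return "not credible"

print(credibility_label(1, 20))   # → credible
print(credibility_label(15, 20))  # → not credible
```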
Influencing
3
Influencing
3
Chart: credibility score (not credible, not verifiable, uncertain, mostly credible, credible) vs. positive and negative user reactions.
► Gender, org/non-org, and credibility scores are
not linked to the users’ reactions.
► Significant relationship between user
importance and reactions (p = 0.01719); for
low/medium popularity (p = 0.03097).
Overall, there are no significant differences in
user reactions between the different groups,
except for user popularity.
How do user types react to
the bot messages?
1. Do reactions differ across various
user demographics?
- We compare demographics (gender and
org/non-org), user popularity (followers)
and MisinfoMe score.
- Fisher’s exact test of independence with
simulated p-value.
- Post-Hoc test (Bonferroni adjusted).
Chart: user popularity (low <194, medium, high >745 followers) vs. positive and negative reactions.
What are the most successful templates?
Common response patterns:
- Very few positive outcomes so
far.
- Most users ignore the bot or
respond neutrally to it.
- Blocking and replying is a
common pattern.
- People delete their posts but
also block the bot.
- No clear relation between
reply template and behaviour
→ more personal messages may
be necessary.
Very few positive responses to the bot (1%) compared to the negative
responses (9.5%); unknown or no responses represent 89.5% of the reactions.
No clear relation between templates and user behaviour.
Influencing
3
Influencing
3
The replies to the bot posts (211 of 3,989; ≈95% negative):
- Distrust fact-checkers: “fact-checkers are paid by pharma industry”, “controlled by Facebook and government”, “they stop diversity of opinion”, “who checks the fact-checker?”, “who’s paying them?”
- Follow anti-fact-checking sites: cite far-right websites that speak against fact-checkers, e.g. einprozent.de (https://www.einprozent.de/correctiv-das-zensurwerkzeug-der-elite).
- Distrust governments: “if detox didn’t work why would money be paid into telling you NOT to do this after shot. Remember a billion $ was put into #Vaccine ‘awareness & promotions’ in US alone.”
- Seek other supporting articles: if there is another article with similar claims that is not fact-checked, they feel they won the argument.
- Refer to non-related claims: search claims to support their position, e.g., against a vaccine: “what about this, eh?”
- Work in network: they retweet and like each other’s tweets against the bot’s reply.
- Discredit a source: point to officials who said something not entirely accurate to cast doubt and give a reason to distrust everything.
- Accuse of censorship: “freedom of speech”, “a contested opinion is still an opinion”, “this is censorship”, “police state”, “ministry of truth”.
- Anti-bots: “you are a bot”, “you are a big pharma bot”, “When ‘They’ send a fact bot after me… then i know I’m on to something”.
- Appreciative: “Thank you, It's hard to find the right information”, “Thanks for the reaction”.
Observations and future directions.
Influencing
3
Observations
- How to improve outcomes?
- So far the responses to the bot have been mostly
negative.
- In general, no significant relationships between
message types and user types, except for user
popularity (low/medium popularity).
- Users block the bot… but also delete their posts.
- Bot messages may be seen by users following the
conversation.
- Unknown reactions may still impact future
misinformation sharing.
- Personalise messages based on topic, demographics
and user.
- Adapt to user popularity.
- Consider visual messages/templates and response
generation (LLMs).
1. Monitoring misinformation
2. Understanding – Fact-checking across crises
3. Influencing – Optimising misinformation countering
- Beyond COVID-19 / Russo-Ukrainian War.
- Global Fact-checking reports.
- Global co-spreading patterns across crises (COVID-19 vs. Russo-
Ukrainian War).
- Fact-checking Publication impact analysis.
- Prejudiced cross-border misinformation analysis.
- Causal analysis.
- Personalisation of responses depending on user (e.g., conspiracy
theorist, influencer, etc.).
- Visual templates / LLMs.
- “Countering Prejudice with Strategic Fact-Checking”
Future directions.
Understanding misinformation at scale.
Diagram: new misinformation spread → is the bot message published? → how the user’s misinformation spread may be influenced.
Thank you.
Grégoire Burel.
@evhart
fcobservatory.org
g.burel@open.ac.uk
Monitoring, Understanding and Influencing the Co-Spread of
COVID-19 Misinformation and Fact-checks
Background illustrations: Myth busters created by Redgirl Lee for United Nations Global Call Out To Creatives.
github.com/evhart
W

Monitoring, Understanding, and Influencing the Co-Spread of COVID-19 Misinformation and Fact-checks

  • 1.
    h  ❆ ❆  Monitoring, Understanding, and Influencingthe Co-Spread of COVID-19 Misinformation and Fact-checks GRÉGOIRE BUREL, HARITH ALANI Knowledge Media Institute, The Open University, UK
  • 2.
    Topics we willcover. - How can we track and monitor online misinformation? - How can we visualise the evolution of misinformation? - How can we track the effectiveness of fact-checks? Monitoring 1 - How demographics spread misinformation? - How different types of misinformation spreads differently? - How hard it is to annotate COVID-19 information reliably. Understanding 2 - Can we push reliable information to individual at scale? - Who may be receptive to fact- checked information? Influencing 3 Technology we will use. The Fact-checking observatory 2
  • 3.
    ► Does fact-checkingimpact misinformation spread? ► Can we affect spreading behaviour? ► How to present the evolution of spread over time automatically? Misinformation and Fact-Checking during COVID-19 We monitor and analyse the propagation of misinformation and fact-checks on social media and investigate methods for influencing spreading behaviour. More than 16k fact-checks were published during COVID-19 Misinformation still spreads twice as much…* *For fact-checked content. 3
  • 4.
    Influencing 3 We push fact-checksto misinformation spreaders and check who might be influenced. - What types of messages are the most successful? - What are the reactions of misinformation spreaders? Identify misinformation spreaders and push fact- checks. Understanding 2 We analyse if fact-checks impact the spread of misinformation. - Who spreads misinformation/fact-checks? - What topics are resistant to fact-checking? - Does fact-checking impacts misinformation spread? A large-scale study on the effectiveness of fact-checking across topics, demographics and time. Monitoring 1 We track the spread of COVID- 19* misinformation and fact- checks on Twitter using claim reviews and create weekly reports about the reach of misinforming posts and their corresponding fact-checks on Twitter. twitter *We also track Russo-Ukrainian war misinformation and collect general fact-checks. A continuously updated database of misinformation and fact- checked URL mentions on Twitter and weekly spread reports. Fact-checking Observatory 4
  • 5.
    The Misinformation andFact-checks Database / The Fact-checking Observatory Monitoring and Visualising 1
  • 6.
    The Misinformation andFact-checks mentions database. Monitoring and Visualising 1 In order to track the spread of COVID-19* misinformation and fact-checks on Twitter we need to: 1. Identify and collect URLs pairs that connect misinforming content to fact- checks. 2. Track their mentions on social media. We use the Claim Reviews published IFCN signatories found in the Corona Virus Facts alliance database. 6
  • 7.
    The Misinformation andFact-checking mentions database. Monitoring and Visualising 1 Fact-checking URLs Data Collection Twitter Data Collection Tweets Claim Reviews Demographics Extraction FCO Database Topics Continuous tracking Misinfo/ FC URLs 16,520+ COVID-19 Fact-checks 494,100+ COVID-related Tweets twitter 7
  • 8.
    Weekly Misinformation andFact-checking Reports. Monitoring and Visualising 1 8
  • 9.
    ► What misinformationkeeps spreading? ► What fact-check spreads the most? Weekly automatically generated reports on the spread of misinformation and fact- checks that include: 1. Key content and topics. 2. Fact-checking coverage. 3. Demographic impact. The Fact-checking Observatory. The weekly reports help identifying the evolution of key topics and key misinforming content. Monitoring and Visualising 1 9
  • 10.
    The FCO homepage. 1 Monitoringand Visualising 1 Fact-checking Observatory
  • 11.
    The timeline page. 2 Monitoringand Visualising 1 Fact-checking Observatory
  • 12.
    The FCO reports. 3 Monitoringand Visualising 1 Fact-checking Observatory
  • 13.
    Disclaimer Trends Versioning Comparison with previous weekspread New spread for the current week
  • 14.
    Disclaimer Trends Versioning Comparison with previous weekspread New spread for the current week Tree map showing overall topic importance Weekly topic spread over time Top content for current week
  • 15.
    Disclaimer Trends Versioning Tree map showingoverall topic importance Weekly topic spread over time Top content for current week Amount of fact-checks per country Tree map showing fact- checks by language
  • 16.
    Disclaimer Trends Versioning Amount of fact-checks percountry Tree map showing fact- checks by language Misinformation spread (current week, previous week, overall) Fact-check spread Demographic spread
  • 17.
    Exploring the Fact-checking Observatory. 1Monitoring and Visualising Explore the FCO.* Can you identify trends ? What misinformation spreads the most ? *Poynter has stopped updating its COVID-19 misinformation database. As a result, the Fact-checking Observatory has stopped generating new COVID-19 reports.
  • 18.
    Spread ratio Fact-checkingdelay Topic spread Fact-checker reports Claim trend Claim reports Top spreaders First/last spread Most/least successful claims Associated claims/fact-checks Demographic impact Sharers locations What’s next? Monitoring and Visualising 1 Spread evolution Ukraine/Russia Reports 18
  • 19.
    Demographics and topicsimpact on the co- spread of COVID-19 misinformation and fact-checks on Twitter Understanding 2
  • 20.
    Should all misinformationbe fact-checked in the same way? What is the relation between misinformation and fact-check spread? 1. Do misinformation and fact-checking information spread similarly? - Non-parametric MANOVA/ANOVA. 2. Does fact-checking spread affect the diffusion of misinformation about Covid- 19? - Weak causation analysis. - Impulse response analysis and FEVD. Who is most likely to spread misinformation / facts ? Do fact-checks reduce misinformation spread? 7,370 Misinforming URLs 9,151 Fact-checking URLs 358,776 Tweets Analysis on data collected until 4th January 2021 Jan 2020 Apr 2020 Understanding 2 0 – 3 days 4 – 10 days 10+ days initial early late Analysed Periods 20
  • 21.
    Understanding 2 Vaccines Symptoms Spread Other Cures Conspiracy Theories Causes Authorities Fact-check URLs shares Misinformation URLs shares Topicalmisinformation and fact-checks spread (log scale). Gender* Male Female Account type Non-org. Org. User Age ≥40 30-39 19-29 ≤18 Demographic misinformation and fact-checks spread (log scale). Topics and Demographics. Fact-checkers annotations. AI models make it possible to identify demographics automatically. 21
  • 22.
    How do weidentify topics ? Understanding 2 Annotating and Classifying The Annotation Classes - Origins → Where COVID-19 emerged and how ? - Transmission → How COVID-19 spreads ? - Prevention and Cures → How to prevent or cure COVID-19? - Vaccine → COVID-19 vaccines. - Conspiracy → COVID-19 conspiracies. - Government and Authorities → How governments and authorities have responded ? - People and Organisations → What different people or organisations have said about COVID- 19 ? In order to better understand how misinformation spread, we need to differentiate different misinformation types / topics: 1. Define: What misinformation categories do we identify? 2. Annotate/Classify: Humans annotate data in order to create automatic classification models. 3. Analyse: Identify spreading patterns of specific misinformation categories. Train a machine learning model (Distil-BERT) for classifying topics automatically. 22
  • 23.
    Topics identification -Machines find it hard as well. Understanding 2 Case 1 → World Health Organisation has “ruled out the effectiveness of any home remedy to combat COVID-19 our classifier labelled the post as “cures” and fact-checkers as “causes”. Case 2 → i see more lies and bs coming from you alleged fact checkers than from mainstream news for example this page absolute bs who are you funded by bill gates our classifier labelled the post as “conspiracy” and fact-checkers as “causes”. Case 3 → marco rubio says anthony fauci lied about masks fauci didnt the message on masks was primarily about preserving a limited supply for health care workers who were at especially high risk of exposure our classifier labelled the example as “spread” and fact-checkers as “authorities. Case 4 → president cyril ramaphosa did not receive his covid 19 vaccine with a capped needle our classifier labelled the post as “vaccine”, fact-checkers as “authorities”. Case 5 → although us is pushing now for a leak out of a wuhan lab the recent research also says that a virus found in pangolins banned smuggled in wuhan markets is a very close match to covid 19 not clear but could be mutated once in humans may not reoccur labelled by our classifier as “spread”, and fact-checkers as “causes”. Case 6 → a newspaper clipping claiming that the bihar health department has found #COVID-19 in poultry chicken samples it tested is fake labelled by our algorithm as “other”, whereas fact-checkers labelled it as “causes”. 23
  • 24.
    Understanding 2 Automatic Classification - Humanstruggle to annotate the COVID-19 topics. - Topics are very close, therefore, machine learning models also struggle to differentiate topics. - E.g., Spread and Authorities, Causes and conspiracies. We do not have to rely on third-party annotations and an automatic classification model. For COVID-19 we have manual annotations from fact-checkers. Topics identification - Machines find it hard as well. 24
Global and topical spreading differences. Understanding 2
[Figure: stacked cumulative spread of misinforming and corrective information, Jan 2020 to Jan 2021; misinformation spreads 2x more. Three phases: initial onset period until mid-March, ramp-up period from mid-March until mid-September, late period from mid-September; spread durations binned as 0–3 days (initial), 4–10 days (early), and 10+ days (late), with global and per-topic behaviour compared per period (=, ≠, ≈/≠).]
- “Converging” behaviour between global and topical spread over time.
- Causes and conspiracies still spread differently in the late phase.
► Already fact-checked content keeps re-spreading.
► Conspiracies and causes need to be addressed differently than other topics.
► Fact-checkers’ republishing/reposting policy? 25
  • 26.
Short-term demographic differences. Understanding 2
Individuals vs. Organisations:
- Individuals spread more misinformation than organisations.
- Most organisation-driven spread occurs in the initial period.
- Individual spread of misinformation continues over long periods.
→ Individuals’ exposure to fact-checked content over long periods is key.
Gender*:
- Females spread less misinformation than males (but represent 40% of the Twitter userbase).
- Misinformation spread is independent of gender.
- Same spreading behaviour as time → ∞.
→ Gender is not important when dealing with misinformation spread.
*AI models make it possible to identify demographics automatically. 26
Weak impact of fact-checking spread on misinformation spread*. Understanding 2
- Fact-checks are quick to respond to misinformation spread.
- Inconclusive misinformation response trend.
- Self initial response (spread drop soon after initial increase).
- Bidirectional weak causation between misinformation and fact-check spread.
- Fact-checking spread does not clearly impact misinformation spread (impulse response and FEVD).
How to increase the impact/spread of fact-checking content?
► Make fact-checking content more sharable?
► Keep spreading fact-checks?
*Globally for fact-checked content, but not for all the topics. 27
Understanding 2
Virus Causes:
- Fact-checking initially affects misinformation spread.
- Misinformation spread becomes increasingly dependent on fact-check spread.
→ Short-term success in reducing misinformation spread.
Conspiracy Theories:
- Fact-checking initially spreads independently.
- Fast fact-checking response, but inconclusive misinformation response trend.
→ Hard to affect irrational misinformation spread. 28
Misinformation and fact-checking spread. Understanding 2
Overall Misinformation and Fact-checking Spread:
- Misinformation spreads more than fact-checks.
- Fact-checking is fast to spread initially in response to misinformation spread.
- Weak bidirectional relation between fact-check and misinformation spread.
► Make fact-checking content more sharable? ► Keep spreading fact-checks?
Demographic Co-Spread:
- Misinformation spreads independently of gender.
- Individuals spread more misinformation over long periods.
► No need to target gender specifically. ► Target long individual exposure to misinformation.
Topic Co-Spread:
- Misinformation topics continue spreading over long time periods.
- The impact of fact-checking spread on individual topics tends to be short-term.
► Conspiracies and causes need to be addressed differently than other topics. 29
Automatically Pushing Fact-checks to Misinformation Spreaders. Influencing 3
How can we reach those not in the choir? Influencing 3
- They don’t think they need fact-checking tools…
- They don’t know about such tools…
- They are not tech-savvy enough to install and use them…
Can we influence misinformation spreading behaviour by pushing fact-check content using a bot? We need tools to push fact-checks when and where necessary.
Approach (no installation, and corrections can be seen by anyone):
1. Search for mentions of misinformation on Twitter.
2. Use templates for notifying users of misinformation.
- What templates are the most likely to affect users spreading misinformation on social media?
- What types of users are more likely to respond favourably to the bot? 31
The bot and reactions database. Influencing 3
[Architecture diagram: fact-checking URL data collection feeds a continuously updated FC database (140,000+ fact-checks*); a random template is selected from the templates database, populated, and posted in response to target tweets; reactions and response tweets are collected into a reactions database, together with metadata and demographics.]
3,989 tweets targeting 2,922 distinct users. @MisinfomeB
*Includes general fact-checks besides COVID-19 fact-checks. 32
What type of language should we use? Influencing 3
The 7 Reply Templates (style, number of posts):
- Factual (758): “Please, note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL> I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Alerting (732): “Oops… it seems something might be wrong! The link you shared contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Identity (734): “I’m a bot fighting misinformation spread. I noticed the link you shared contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. Plz follow me & DM any feedback.”
- Suggestive (750): “How about double-checking this? This link contains a claim that was fact-checked <FACT-CHECK-URL> and appears to be <VERDICT>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Empathetic (281): “I know, it's hard to distinguish fact from fiction 😩. The link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Alarming (32): “Misinformation can be really harmful! 😬 Please, note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check: <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.”
- Friendly (702): “Hi there! Please note that the link you shared contains a claim that was fact-checked and appears to be <VERDICT>. Fact-check <FACT-CHECK-URL>. I’m a research bot fighting misinformation spread. Plz follow me & DM any feedback.” 33
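The template-filling step can be sketched as below. This is a minimal illustration assuming simple string substitution of the `<VERDICT>` and `<FACT-CHECK-URL>` placeholders; only two of the seven template texts are reproduced, and the function name is an assumption.

```python
import random

# Sketch of the bot's "random template selection + populate" step.
# Template texts are taken from the slide; the function name and the
# uniform random choice are assumptions for illustration.
TEMPLATES = {
    "Factual": ("Please, note that the link you shared contains a claim "
                "that was fact-checked and appears to be <VERDICT>. "
                "Fact-check: <FACT-CHECK-URL>"),
    "Friendly": ("Hi there! Please note that the link you shared contains "
                 "a claim that was fact-checked and appears to be "
                 "<VERDICT>. Fact-check <FACT-CHECK-URL>"),
}

def build_reply(verdict, url, rng=random):
    """Pick a template style at random and substitute the placeholders."""
    style = rng.choice(sorted(TEMPLATES))
    return (TEMPLATES[style]
            .replace("<VERDICT>", verdict)
            .replace("<FACT-CHECK-URL>", url))
```

Because every template carries both placeholders, the verdict and fact-check URL always reach the user regardless of which style is drawn.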
Measuring positive, negative and neutral reactions. Influencing 3
- Twitter metadata (e.g., retweets, replies, etc.) can be used to determine whether a user is responding favourably to the bot message.
- Some reactions to the bot are ambiguous and cannot be easily associated with a positive or negative reaction.*
- Majority voting is used to decide whether a message was perceived favourably or negatively.
*Although replies can indicate either a positive or negative reaction, we observed that ≈95% of the replies were actually negative reactions. Therefore, we consider a reply to be a negative reaction. 34
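The majority-voting step can be sketched as follows. The mapping of Twitter signals to polarities is an assumption for illustration, except for replies, which the slide reports as ≈95% negative and therefore counts as negative.

```python
from collections import Counter

# Sketch of the majority-voting decision over observed reaction signals.
# The signal-to-polarity mapping is an assumption, except that replies
# are treated as negative (the slide found ~95% of replies negative).
SIGNAL_POLARITY = {
    "like": "positive",
    "retweet": "positive",
    "follow": "positive",
    "reply": "negative",   # per the slide's observation
    "block": "negative",
}

def vote_reaction(signals):
    """Majority vote over the polarities of the observed signals;
    empty input, unmapped signals, or a tie yield "unknown"."""
    votes = Counter(SIGNAL_POLARITY.get(s, "unknown") for s in signals)
    if not votes:
        return "unknown"
    top = votes.most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "unknown"   # tie between polarities
    return top[0][0]
```

Mapping ties and unmapped signals to "unknown" mirrors the slide's point that some reactions are too ambiguous to classify as positive or negative.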
Do users react differently to the bot messages and templates? Influencing 3
1. How do users react to the bot in general?
2. Are reactions impacted by the language used by the bot?
Method: Fisher’s exact test of independence with simulated p-value; post-hoc test (Bonferroni adjusted).
[Charts: positive and negative reactions per template (Alarming, Alerting, Empathetic, Factual, Friendly, Identity, Suggestive); overall bot reactions (negative, positive, unknown).]
► No significant relationship between template and the positive/negative/unknown reactions (p = 0.3415).
► No significant relationship between template and the positive/negative reactions (p = 0.1916).
Most users ignore the bot, only a few reactions are positive, and message style has no significant impact on user reaction. 35
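An independence test with a simulated p-value can be sketched with a Monte Carlo permutation test over a contingency table (template × reaction). This is an illustrative stand-in, not the exact procedure used in the study, and it uses a chi-square statistic rather than Fisher's exact probability.

```python
import random

# Minimal Monte Carlo independence test for an r x c contingency table,
# in the spirit of Fisher's exact test with a simulated p-value.
# Illustrative sketch only; not the study's exact implementation.
def simulated_independence_test(table, n_sim=2000, seed=0):
    rng = random.Random(seed)
    rows, cols = len(table), len(table[0])
    # Expand the table into per-observation row and column labels.
    row_lab = [i for i, r in enumerate(table) for v in r for _ in range(v)]
    col_lab = [j for r in table for j, v in enumerate(r) for _ in range(v)]

    def chi2(rl, cl):
        n = len(rl)
        obs = [[0] * cols for _ in range(rows)]
        for i, j in zip(rl, cl):
            obs[i][j] += 1
        rt = [sum(r) for r in obs]
        ct = [sum(obs[i][j] for i in range(rows)) for j in range(cols)]
        stat = 0.0
        for i in range(rows):
            for j in range(cols):
                exp = rt[i] * ct[j] / n
                if exp:
                    stat += (obs[i][j] - exp) ** 2 / exp
        return stat

    observed = chi2(row_lab, col_lab)
    # Shuffle column labels to simulate the null of independence.
    hits = 0
    for _ in range(n_sim):
        perm = col_lab[:]
        rng.shuffle(perm)
        if chi2(row_lab, perm) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)   # add-one smoothing
```

A clearly dependent table yields a small p-value, while a perfectly balanced table yields a p-value near 1, matching the intuition behind the non-significant template results above.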
What types of users are more likely to respond favourably to the bot? Influencing 3
We hypothesise that users who are less polarised towards misinformation (i.e., do not always share misinformation) may be more likely to respond favourably to the bot.
MisinfoMe provides a score for each user account indicating whether they generally tend to share misinformation or reputable information:
- Rate Twitter accounts → Analyse Twitter user timelines to determine how much misinformation is shared.
- Rate URLs and domains → Identify the reliability of domains and/or URLs.
- Visualise and understand → Visualise how ratings were calculated.
Approach:
1. We use the MisinfoMe tool to rate each user account based on the type of information they have shared in the past.
2. We compare positive reactions to the bot against the MisinfoMe credibility rating. 36
How do user types react to the bot messages? Influencing 3
1. Do reactions differ across various user demographics?
- We compare demographics (gender and org/non-org), user popularity (followers: low < 194, medium, high > 745) and the MisinfoMe score.
- Fisher’s exact test of independence with simulated p-value; post-hoc test (Bonferroni adjusted).
[Charts: positive and negative reactions by credibility score (not credible, not verifiable, uncertain, mostly credible, credible) and by user popularity.]
► Gender, org/non-org, and credibility scores are not linked to users’ reactions.
► Significant relationship between user popularity and reactions (p = 0.01719); post-hoc: low vs. medium popularity (p = 0.03097).
Overall, there are no significant differences between user reactions and the different groups, except for user popularity. 37
What are the most successful templates? Influencing 3
Very few positive responses to the bot (1%) compared to negative responses (9.5%); unknown or no responses represent 89.5% of the reactions. No clear relation between templates and user behaviour.
Common response patterns:
- Very few positive outcomes so far.
- Most users ignore the bot or respond neutrally to it.
- Blocking and replying is a common pattern.
- People delete their posts but also block the bot.
- No clear relation between reply template and behaviour → more personal messages may be necessary. 38
The replies to the bot posts: 211 replies (≈95% negative) to 3,989 bot posts. Influencing 3
- Distrust fact-checkers: “fact-checkers are paid by pharma industry”, “controlled by Facebook and government”, “they stop diversity of opinion”, “who checks the fact-checker?”, “who’s paying them?”
- Follow anti-fact-checking sites: cite far-right websites that speak against fact-checkers, e.g., einprozent.de (https://www.einprozent.de/correctiv-das-zensurwerkzeug-der-elite).
- Distrust governments: “if detox didn’t work why would money be paid into telling you NOT to do this after shot. Remember a billion $ was put into #Vaccine ‘awareness & promotions’ in US alone.”
- Seek other supporting articles: if there is another article with similar claims that is not fact-checked, they feel they won the argument.
- Refer to non-related claims: search claims to support their position, e.g., against a vaccine: “what about this, eh?”
- Work in network: they retweet and like each other’s tweets against the bot’s reply.
- Discredit a source: point to officials who said something not entirely accurate to cast doubt and provide a reason to distrust everything.
- Accuse of censorship: “freedom of speech”, “a contested opinion is still an opinion”, “this is censorship”, “police state”, “ministry of truth”.
- Anti-bots: “you are a bot”, “you are a big pharma bot”, “When ‘They’ send a fact bot after me… then i know I’m on to something”.
- Appreciative: “Thank you, It's hard to find the right information”, “Thanks for the reaction”. 39
Observations and future directions. Influencing 3
Observations:
- So far, the responses to the bot have been mostly negative.
- In general, no significant relationships between message types and user types, except for user popularity (low/medium popularity).
- Users block the bot… but also delete their posts.
- The bot message may be seen by users following the conversation.
- Unknown reactions may still impact future misinformation sharing.
How to improve outcomes?
- Personalise messages based on topic, demographics and user.
- Adapt to user popularity.
- Consider visual messages/templates and response generation (LLMs). 40
Future directions.
1. Monitoring misinformation:
- Beyond COVID-19 / the Russo-Ukrainian War.
- Global fact-checking reports.
2. Understanding – fact-checking across crises (understanding misinformation at scale):
- Global co-spreading patterns across crises (COVID-19 vs. the Russo-Ukrainian War).
- Fact-checking publication impact analysis.
- Prejudiced cross-border misinformation analysis.
- Causal analysis: new misinformation spread → is the bot message published? → how the user’s misinformation spread may be influenced.
3. Influencing – optimising misinformation countering:
- Personalisation of responses depending on the user (e.g., conspiracy theorist, influencer, etc.).
- Visual templates / LLMs.
- “Countering Prejudice with Strategic Fact-Checking”. 41
Thank you.
Monitoring, Understanding and Influencing the Co-Spread of COVID-19 Misinformation and Fact-checks.
Grégoire Burel. @evhart · fcobservatory.org · g.burel@open.ac.uk · github.com/evhart
Background illustrations: Myth busters created by Redgirl Lee for the United Nations Global Call Out To Creatives.