Artificial Intelligence, one more weapon in the fight against disinformation: overview in the IFCN Hispanic network of fact-checkers (DataJConf 2023, Zurich)
Artificial intelligence is being used by some fact-checking organizations to aid in their work, though implementation is uneven and still developing. Around half of the 17 organizations studied have created 1-3 AI tools, mainly chatbots, to help with tasks like verifying sources and statements. However, many lack dedicated research teams or funding. While AI offers opportunities to increase efficiency, fact-checkers also see challenges from advances like deepfakes requiring new verification methods. Ongoing collaboration and adapting to changing information threats will help fact-checkers continue leveraging AI responsibly.
1. Artificial Intelligence, one more weapon in the fight against disinformation: overview in the IFCN Hispanic network of fact-checkers
*R+D+i project “DESINFOPER”, PID2019-108956RB-I00.
Other authors: Hada Sánchez and Sergio Martínez
Speaker: María Sánchez González
University of Malaga, Spain | m.sanchezgonzalez@uma.es
2. The case of Spanish and Ibero-American fact-checking platforms*
*Members of IFCN's #CoronavirusFacts Alliance (17)
Research questions
● How do these verifiers view Artificial Intelligence and its possible applications for fact-checking?
● Have these verifiers developed their own initiatives and/or do they use external tools? What are those tools used for?
● To what extent do verifiers take advantage of AI to carry out their work? What are the limitations and opportunities?
Source: IFCN on Facebook, May 2020
3. Methodology
CORE: Online consultation on AI projects (2022) + information retrieved from interviews with professionals responsible for some verifiers (2020-21). [Census: 14 tools, by 8 of the 17 fact-checkers] [March-May 2022]
Collected via an online form, covering technological and functional categories and variables such as:
● Context for technological innovation and the use of AI in the verifiers analyzed (existence or not of R+D+i sections; profiles; perception of AI potential; participation in programs or external collaborations; etc.).
● Development of AI tools/initiatives by each of the verifiers analyzed (Yes/No; typology; use in journalism and verification; etc.).
Localization, classification and analysis of self-developed AI projects.
COMPLEMENT: Collection of perceptions regarding digital verification tools and regarding plans to develop their own initiatives based on big data and artificial intelligence in the short and medium term. Vision of the professionals responsible | Documentary review | Expert vision (8 people) | etc.
4. Methodology
Basic data of projects analyzed (17) and interviews/contacts made

Fact-checker | Country | Person interviewed (2020-21) | Person contacted online (2022)
Agencia Lupa | Brazil | - | -
Agencia Ocote (Fáctica) | Guatemala | Alejandra Gutiérrez (director and editorial coordinator) | Alejandra Gutiérrez (director and editorial coordinator)
Animal Político (El Sabueso) | Mexico | Tania Montalvo (general editor) | -
Aos Fatos | Brazil | - | -
Bolivia Verifica | Bolivia | - | -
Chequeado | Argentina | Laura Zommer (executive and editorial director) | -
Colombia Check | Colombia | - | Jeanfreddy Gutiérrez (director)
Ecuador Chequea | Ecuador | Erika Astudillo (editor) | -
EFE Verifica | Spain | Desireé García (head) | -
Estadão Verifica | Brazil | - | -
La Silla Vacía (Detector de mentiras) | Colombia | - | María Echeverry (fact-checker)
Maldita (Maldito Bulo) | Spain | Clara Jiménez (co-founder and co-director) | -
Newtral | Spain | Joaquín Ortega (head of content) | Irene Larraz (fact-checking coordinator) and Pablo Hernández
Observador | Portugal | - | -
Salud con Lupa (Comprueba) | Peru | Fabiola Torres (co-founding director) | -
La República (Verificador) | Peru | Irene Ignacio (content coordinator) | Irene Ignacio (content coordinator)
Professionals and experts consulted on AI (8)
● Miriam Hernanz (formerly of RTVE Lab, now Prisa Media). Interview 23/03/21.
● Rocío Celeste (AI expert). Interview 25/03/21.
● Ana Valdivia (data expert). Interview 26/03/21.
● José Carlos Sánchez (Prodigioso Volcán).
● Eirini Chatzikoumi (researcher on journalism and AI). Interview 01/04/21.
● Idoia Salazar (journalist and president of OdiseIA). Interview 12/04/21.
● Javier Cantón (expert in data and verification). Interview 26/04/21.
● David Fernández (Maldita). Interview 28/04/21.
5. Results and discussion
General agreement among professionals about the possibilities of AI for fact-checking, almost always as a complement to their human work as verifiers:
● Warning and early detection of disinformation in texts, statements, images and videos (deepfakes).
● Time savings by automating routine tasks.
● Increased productivity and efficiency, especially as AI algorithms are trained.
● Etc.
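One of the routine tasks mentioned above, spotting statements worth checking early, is often bootstrapped with simple heuristics before any model is trained. A minimal, hypothetical Python sketch (the patterns, weights and threshold below are illustrative assumptions, not any verifier's actual method):

```python
import re

# Illustrative heuristics: sentences containing percentages, years,
# other numbers or strong comparative verbs are treated as more
# likely to be check-worthy factual claims.
PATTERNS = [
    (re.compile(r"\d+(?:[.,]\d+)?\s*%"), 2.0),   # percentages
    (re.compile(r"\b\d{4}\b"), 1.0),             # years
    (re.compile(r"\b\d+(?:[.,]\d+)?\b"), 1.0),   # other numbers
    (re.compile(r"\b(increased|decreased|doubled|highest|lowest)\b", re.I), 1.5),
]

def claim_score(sentence: str) -> float:
    """Sum the weight of every heuristic pattern present in the sentence."""
    return sum(w for p, w in PATTERNS if p.search(sentence))

def flag_checkworthy(sentences, threshold=2.0):
    """Return the sentences whose heuristic score reaches the threshold."""
    return [s for s in sentences if claim_score(s) >= threshold]

sentences = [
    "Unemployment increased by 12% in 2020.",
    "I believe we should all work together.",
]
print(flag_checkworthy(sentences))
# → ['Unemployment increased by 12% in 2020.']
```

A production claim detector would replace these regexes with a trained classifier, but the pipeline shape (score each sentence, surface those above a threshold for human review) is the same.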
6. However, implementation of AI in the verifiers analyzed is uneven and incipient
Around half of the cases studied (8 out of 17 verifiers) have AI-based initiatives/tools (14 localized in total) that apply to fact-checking.
These are the most veteran and innovative verifiers, such as the Spanish Maldita and Newtral, with 3 initiatives each.
Practically all of the tools emerged in the last 5 years (many during the Covid-19 pandemic) and were available (also openly) at the time of the study.

Fact-checker | AI initiative(s) or tool(s) | Year of creation | Available at study time
Agencia Lupa | Projeto Lupe! | 2018 | No
Agencia Lupa | No Epicentro | 2020 | Yes
Aos Fatos | Fátima | 2018 | Yes
Aos Fatos | Radar | 2020 | Yes
Colombia Check | Redcheq | 2019 | No
Bolivia Verifica | Olivia | 2022 | Yes
Chequeado | Chequeabot | 2016 | Yes
Maldita | Chatbot Maldita.es | 2020 | Yes
Maldita | Maldita.es | 2019 | Yes
Maldita | Caja de herramientas | 2018 | Yes
Newtral | ClaimHunter | 2020 | Yes
Newtral | Editor | 2020 | Yes
Newtral | Servicio de verificación de WhatsApp | 2020 | Yes
EFE Verifica | Videre AI | 2022 | Yes
7. Features of self-developed AI tools: chatbots for verification as the majority trend, while other uses remain to be exploited (e.g. generative AI or deepfakes)

Typology:
1. Bot (8)
2. Service (4)
3. Online application (2)
4. Mobile app (1)
5. Browser extension (1)
6. Microsite (1)

Orientation regarding journalistic work:
1. Verification procedures (12)
2. Management and cleaning of massive databases (5)
3. Sending automated responses/content to audiences (5)
4. Automatic content generation (2)

Use in verification processes:
1. Source verification (11)
2. Verification of phrases that have previously been published (7)
3. Verification in text processing and interpretation (6)
4. Verification of audio as text (6)
5. Automatic transcription (1)
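The second most common use, matching a new statement against phrases that have already been fact-checked, can be illustrated with a plain bag-of-words similarity search over an archive of past checks. A hypothetical Python sketch (the archive, tokenizer and 0.5 threshold are illustrative assumptions, not how tools such as Chequeabot or ClaimHunter actually work):

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lower-case bag-of-words representation of a claim."""
    return Counter(re.findall(r"[a-záéíóúñ]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(new_claim, archive, min_sim=0.5):
    """Return the most similar previously checked claim, or None."""
    q = tokens(new_claim)
    sim, claim = max((cosine(q, tokens(c)), c) for c in archive)
    return claim if sim >= min_sim else None

# Toy archive of previously debunked claims (invented examples).
archive = [
    "The vaccine contains microchips",
    "Drinking bleach cures the virus",
]
print(best_match("the vaccine contains a microchip", archive))
# → The vaccine contains microchips
```

Real systems substitute sentence embeddings for word counts so that paraphrases still match, but the retrieve-and-threshold structure is the same.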
8. The lack of economic resources, adapted structures and specific profiles: causes of the limited use of AI

Does it have an R+D+i section or similar? (17 answers: Yes/No)
*More than three quarters (76.5%) did not have their own R+D+i section.
*Exceptions: Chequeado, Newtral, Maldita.es (and Aos Fatos), with specific sections for technological innovation and multidisciplinary teams of professionals.

Profile of professionals making up teams linked to R+D+i (4 answers):
● Journalists
● Teachers/professors
● Scientists from health and other disciplines
● Computer programmers/developers
● Statisticians/analysts/data scientists/etc.
● Linguists

AI, an example of convergence between disciplines (Prodigioso Volcán, 2020: 54): importance of interdisciplinary collaboration as an opportunity.
9. Co-creation in different modalities and with various entities, another opportunity: context of implementation of tools
Modalities: technological innovation projects/programs | collaborative alliances within the IFCN framework | external collaborations in their creation.
Most tools arose and were developed internally by the fact-checkers; the exceptions (3 of 14) came out of such co-creation arrangements.
More than half (10 of the 14) were created and/or disseminated thanks to alliances, collaborations or contracts with external organizations. The economic support of large technology companies such as Facebook (3) or Google (2) was essential.
10. Final notes for discussion
2023, the year of AI. Changing context = a limitation of the study: the results would likely look different one year later.
Double consequence of the vertiginous advance of AI, especially generative AI:
1) New threats and questions. E.g. verifiers' perception of deepfakes and other dangers of the irresponsible use of generative AI to spread disinformation; the effectiveness of the tools developed by fact-checkers in verifying content generated by new generative AI tools (that is, whether they recognize it as not real); etc.
2) New opportunities for fact-checkers. E.g. innovations in their self-developed tools; incorporation of new external AI tools into their routines; participation in external verification/content-generation AI projects as experts or advisors: they know the questions to ask the machines and are aware of the importance of certain “rules” or good practices to avoid irresponsible use of these tools by citizens or certain actors and, with it, misinformation; etc.
11. Thank you for your attention
Dra. María Sánchez González
Universidad de Málaga (and UNIA and Databeers Málaga) | m.sanchezgonzalez@uma.es
Research part of the project "The Impact of Disinformation in Journalism: Contents, Professional Routines and Audiences (DESINFOPER)", PID2019-108956RB-I00.
Special thanks to all fact-checking professionals and AI experts consulted.
Vectors taken from Flaticon/Freepik.
Other authors: Dra. Hada M. Sánchez Gonzales and Sergio Martínez Gonzalo