Slides from the International Journalism Festival 2023, AI and Disinformation panel. Here is the video: https://www.journalismfestival.com/programme/2023/ai-and-disinformation
1. AI and disinfo
Carola Frediani - Guerredirete.it
Newsletter: https://guerredirete.substack.com/
2. AI and public discourse:
“Words are important”, and they are all ambiguous:
- Intelligence
- Artificial/Human
- AGI
- Superintelligence
- Open(AI)
- Alignment
- (AI) Ethics
4. Criti-hype
Criticism that both feeds and feeds on hype is “criti-hype”, a term I find both absurd and ugly-cute.
Lee Vinsel
I'll say it: The AI panic is starting to feel like an advertisement.
Ben Collins
It accepts the technological or scientific supremacy of an elite group and therefore leaves them with the
power to suggest solutions.
Jürgen (tante) Geuter
This open letter — ironically but unsurprisingly — further fuels AI hype and makes it harder to tackle real,
already occurring AI harms. I suspect that it will benefit the companies that it is supposed to regulate, and not
society.
Arvind Narayanan
5. First rule of AI hype
Every story that magnifies the dangers of a superintelligent AI, that
anthropomorphizes it, that insists on the risks of a sentient AI serves a
deterministic narrative about AI. In this respect, it may be the other side
of those stories about AI solving the world's problems.
No better way to pique interest in a text generator than to suggest it holds such power - Brian Merchant
It makes their new line of business look like it is much more powerful than it truly is —
and that is ultimately great for investors - Paris Marx
6. Second rule of AI hype
If AI seems human, check that you are not looking into a mirror. In fact, we tend to
anthropomorphize AI while we change and adapt our human environment to machines.
On the first point, see: “Most of the breathless reporting on chatbots and "AI" feels like it was written
by a cat freaking out after seeing its reflection in a mirror” - Janus Rose
On the second: Luciano Floridi
Corollary risk: dehumanization (see Resisting Dehumanization in the Age of AI - Emily Bender)
7. The brain is a computer is a brain
“We will break down the metaphor below, and show how the concepts
embedded within it afford the human mind less complexity than is owed, and
the computer more wisdom than is due”.
“Treating machines like people can lead to treating people like machines”.
(The brain is a computer is a brain: neuroscience’s internal debate and the social
significance of the Computational Metaphor - Baria, Cross)
9. ChatGPT and the dots
This is a deliberate design choice, by ChatGPT in particular. You know, Google Bard
doesn’t do this. Google Bard is a system for making queries and getting answers.
ChatGPT puts little three dots [as if it’s] “thinking” just like your text message does.
ChatGPT puts out words one at a time as if it’s typing. The system is designed to
make it look like there’s a person at the other end of it. That is deceptive. And that
is not right, frankly.
Suresh Venkatasubramanian
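To make the design choice concrete, here is a minimal sketch (hypothetical, not OpenAI's actual code) of how a chat interface can stage the "typing" illusion the quote describes:

```python
import sys
import time

def stream_reply(text: str, delay: float = 0.05) -> None:
    """Print a canned reply one token at a time, mimicking human typing."""
    # Show a "thinking" indicator first, like the three dots in a messaging app.
    for _ in range(3):
        sys.stdout.write(".")
        sys.stdout.flush()
        time.sleep(0.3)
    sys.stdout.write("\r" + " " * 3 + "\r")  # clear the dots

    # Emit the reply word by word instead of all at once.
    for token in text.split():
        sys.stdout.write(token + " ")
        sys.stdout.flush()
        time.sleep(delay)  # the pause is pure theater: the text is already there
    sys.stdout.write("\n")

stream_reply("The system is designed to look like a person is typing.")
```

The delay serves no computational purpose; it only performs humanness.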
10. Third rule of AI hype
In the world of communications, “AI” is whatever corporate marketing
departments decide to call artificial intelligence.
Corollary:
On Twitter, generations of life-hack experts have turned into prompt-hack
experts. What matters is that there is: 1) a promise of increased
productivity / money, 2) a thread, 3) a final recap.
11. Fourth rule of AI hype
Or: the rule of the evergreen Mechanical Turk.
Dig deep behind intelligence, automation and the cloud and you'll find live workers
tagging data, building filters, checking or reviewing automated decisions while
locked in a closet somewhere.
(See also Potemkin AI - Jathan Sadowski)
12. Absorbing (and reinforcing) hegemonic view
“The first risks we consider are the risks that follow from the LMs absorbing
the hegemonic worldview from their training data. When humans produce
language, our utterances reflect our worldviews, including our biases [78, 79].
As people in positions of privilege with respect to a society’s racism, misogyny,
ableism, etc., tend to be overrepresented in training data for LMs (as
discussed in §4 above), this training data thus includes encoded biases, many
already recognized as harmful”.
From On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
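As a toy illustration of how stereotype-aligned correlations live in raw text (a made-up five-sentence corpus, not the paper's methodology), simple co-occurrence counting already surfaces the pattern a model trained at scale would absorb:

```python
from collections import Counter

# Toy corpus standing in for web-scale training data (purely illustrative).
corpus = [
    "the nurse said she was tired",
    "the engineer said he fixed it",
    "the nurse said she would help",
    "the engineer said he was busy",
    "the engineer said she was busy",
]

# Count which pronoun follows each occupation: a crude proxy for the
# stereotype-aligned correlations a language model picks up from its data.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for occupation in ("nurse", "engineer"):
        if occupation in words:
            idx = words.index(occupation)
            # in these templates the pronoun sits two positions after the noun
            counts[(occupation, words[idx + 2])] += 1

print(counts)
# Counter({('nurse', 'she'): 2, ('engineer', 'he'): 2, ('engineer', 'she'): 1})
```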
14. We need more visibility starting from datasets
Despite rapid growth, the disciplines of data-driven decision making—including ML—have come under sustained criticism in
recent years due to their tendency to perpetuate and amplify social inequality [13, 44]. Data is frequently identified as a key
source of these failures through its role in “bias-laundering” [40, 51, 54, 119, 125]. For example, recent studies have
uncovered widespread prevalence of undesirable biases in ML datasets, such as the underrepresentation of
minoritized groups [27, 40, 131] and stereotype aligned correlations [28, 51, 72, 155]. Datasets also frequently reflect
historical patterns of social injustices, which can subsequently be reproduced by ML systems built from the data. (...)
The norms and standards of data collection within ML have themselves been subject to critique, with scholars identifying
insufficient documentation and transparency regarding processes of dataset construction [52, 53, 126], as well as
problematic consent practices [114]. The lack of accountability to datafied and surveilled populations as well as groups
impacted by data-driven decisions [32] has been further critiqued. Taken together, these objections around data raise a
serious challenge to the justifiability of the datasets used by many ML applications. How can AI systems be trusted when
the processes that generate their development data are so poorly understood? If ML is to survive this crisis in a
responsible fashion, it must adopt visibility practices that enable accountability and responsibility throughout the
data development lifecycle.
Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure
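One concrete visibility practice this argument points toward is structured dataset documentation. Below is a minimal sketch, loosely inspired by the "datasheets for datasets" idea; the field names are illustrative assumptions, not the cited paper's framework:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    """Minimal provenance record for a training dataset (illustrative only)."""
    name: str
    motivation: str            # why the dataset was created, and by whom
    collection_process: str    # how instances were gathered (scraped, surveyed, ...)
    consent_obtained: bool     # whether data subjects consented to this use
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups
    maintainer: str = ""       # who is accountable for updates and takedowns

sheet = DatasetDatasheet(
    name="example-web-corpus",
    motivation="Language model pretraining",
    collection_process="Common Crawl snapshot, English-language filter",
    consent_obtained=False,
    known_gaps=["minoritized dialects", "non-Western viewpoints"],
    maintainer="dataset-team@example.org",
)
print(sheet)
```

Even a record this small makes the consent and representation questions raised above answerable rather than invisible.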
15. What we need to address now
Automated mass surveillance (e.g., facial recognition)
Predictive policing (e.g., recidivism risk scores)
Discriminatory impact of AI systems
Reinforcing power imbalances, societal biases (or data biases) and stereotyping (even in aesthetics, e.g., Lensa)
Automation biases / Implicit biases
Targeted and more sophisticated disinformation and propaganda at scale (and the Liar’s Dividend)
Targeted and more sophisticated frauds
Overreliance (e.g., on hallucinated output)
Human labor exploitation / Data exploitation
Energy / resources consumption
Absence of explainability / Transparency / Accountability, etc.
16. Some resources for a different view:
Weapons of Math Destruction - Cathy O'Neil
Algorithms of Oppression - Safiya Noble
Artificial Unintelligence - Meredith Broussard
Design Justice - Sasha Costanza-Chock
Atlas of AI - Kate Crawford
Artificial Intelligence: A Guide for Thinking Humans - Melanie Mitchell
AI Now Institute
The DAIR Institute