Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and Safety
1. MATT LEASE
Associate Professor
School of Information
The University of Texas at Austin
Lab: ir.ischool.utexas.edu
@mattlease
Slides: slideshare.net/mattlease
7. Content Moderation Challenges
• Internet scale (+ high cost and latency of manual human reviews)
• High accuracy requirements (high cost of mistakes)
• What is considered acceptable varies by platform & region (legal
& cultural), is continually evolving, and faces adversarial attacks
• Issues of free speech & due process in removal & remediation
17.
Content moderators work at a
Facebook office in Austin, Texas.
“A counselor in Austin, who is one of five on staff for
roughly 450 moderators spread across several
offices in the Texas capital, said the job could cause
a form of post-traumatic stress disorder known as
vicarious trauma.”
“Finding the right balance between content reviewer
well-being and resiliency, quality, and productivity
[and responsiveness] is very challenging at the
scale we operate in.” ~ Facebook
18.
“…so many people have written to me just to say that
they didn't know that human beings were actually
doing this work. They assumed it was all automated.”
19. The Great Irony
The tasks we most want AI to take over
(emotionally disturbing content review) are the very
ones people still do, because AI isn’t good enough yet
20. • Improve accuracy of AI prediction models
• Develop effective human-in-the-loop systems
• Design HCI methods for safe & accurate work
• Promote social justice for human moderators
Information Retrieval &
Crowdsourcing Lab
http://ir.ischool.utexas.edu
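One common human-in-the-loop pattern behind the second goal above is confidence-based routing: the model decides confident cases automatically and escalates uncertain ones to human moderators. A minimal sketch (the function names, the toy classifier, and the 0.9 threshold are illustrative assumptions, not any platform's actual pipeline):

```python
def route(items, classify, threshold=0.9):
    """Split items into auto-decided and human-review queues.

    `classify` returns a (label, confidence) pair; `threshold` is an
    assumed operating point, tuned in practice against error costs.
    """
    auto, human = [], []
    for item in items:
        label, conf = classify(item)
        # Confident predictions are handled automatically;
        # uncertain ones are escalated to a human moderator.
        (auto if conf >= threshold else human).append((item, label, conf))
    return auto, human


# Toy usage with a stand-in classifier:
def toy_classifier(text):
    return ("flag", 0.95) if "slur" in text else ("ok", 0.6)

auto, human = route(["post with slur", "ambiguous post"], toy_classifier)
# auto  -> [("post with slur", "flag", 0.95)]
# human -> [("ambiguous post", "ok", 0.6)]
```

Raising the threshold sends more items to humans (higher cost, fewer automated mistakes); lowering it does the reverse, which is exactly the accuracy/cost trade-off named on the first slide.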
21. with An Thanh Nguyen (UT), Byron Wallace (Northeastern), & more!
Believe it or not: Designing a
Human-AI Partnership for Mixed-
Initiative Fact-Checking
23. Anubrata Das, Brandon Dang, and Matthew Lease
School of Information
The University of Texas at Austin
Fast, Accurate, and Healthier:
Interactive Blurring Helps Moderators
Reduce Exposure to Harmful Content
24. Research Question
By revealing less of an image, can we reduce the emotional
labor of image moderation without compromising
moderator accuracy and efficiency?
25. Design and Demo
http://ir.ischool.utexas.edu/CM/demo/
Dang, Brandon, Martin J. Riedl, and Matthew Lease. "But Who Protects the Moderators? The Case of Crowdsourced Image Moderation." arXiv preprint arXiv:1804.10999 (2018).
Code: https://github.com/budang/content-moderation
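The core interaction in the demo above can be sketched in a few lines: show the moderator a blurred image by default, and un-blur only the regions they choose to inspect. This toy grayscale version (a plain box blur over a 2D pixel grid; not the paper's actual implementation, which blurs real images in the browser) illustrates the idea:

```python
def box_blur(img, radius=1):
    """Return a box-blurred copy of a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            # Average over the (2*radius+1) x (2*radius+1) window,
            # clipped at the image borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out


def reveal_region(blurred, original, x0, y0, x1, y1):
    """Copy un-blurred pixels back into one region, simulating a
    moderator choosing to inspect only that part of the image."""
    out = [row[:] for row in blurred]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = original[y][x]
    return out
```

The moderator's total exposure is then bounded by the regions they explicitly reveal, which is the quantity the research question asks about trading off against accuracy and speed.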
29. The Myth of Automation
Computer systems often embody
hidden human labor
• Gray and Suri (2019) “ghost work”
• Ekbia and Nardi (2014) “heteromation”
• Irani and Silberman (2013) “invisible work”
31.
As the coronavirus pandemic swept the world, social media giants like Facebook,
Google and Twitter did what other companies did. They sent workers home —
including the tens of thousands of people tasked with sifting through mountains of
online material and weeding out hateful, illegal and sexually explicit content.
In their place, the companies turned to algorithms to do the job.
The COVID-driven experiment represented a real-world baptism of fire for something
social media companies have long dreamed of: using machine-learning tools and
artificial intelligence — not humans — to police posts on their platforms.
It did not go well.
35. Health Effects for Moderators
“The psychological effects of viewing harmful content
is well documented, with reports of moderators
experiencing posttraumatic stress disorder (PTSD)
symptoms and other mental health issues...”
(Cambridge Consultants, 2019)
“…many other employees develop long-lasting mental
health symptoms that stop short of full-blown PTSD,
including depression, anxiety, and insomnia.”
(Casey Newton, 2020)
Image Source: The Verge