Frontiers of
Computational Journalism
Columbia Journalism School
Week 3: Information Filter Design
September 26, 2016
This class
• The need for information filtering
• Filtering algorithms
• Human-machine filters
• Filter bubbles and other problems
• The filter design problem
The Need for Filtering
There is more video on YouTube than was produced by TV networks during the entire 20th century.
10,000 legally-required reports filed by U.S. public
companies every day
Each day, the Associated Press publishes:
~10,000 text stories
~3,000 photographs
~500 videos
+ radio, interactive…
Comment Ranking
Comment voting
Problem: putting comments with most votes at top doesn’t work.
Why?
Old reddit comment ranking
“Hot” algorithm:
upvotes minus downvotes, plus time decay
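For reference, a sketch of that formula in Python, following the commonly published description of reddit's old “hot” ranking (the decay divisor is the widely cited value; treat the exact constants as illustrative):

```python
from math import log10

def hot_score(ups, downs, seconds_since_site_epoch):
    """Old reddit-style 'hot' score: net votes on a log scale plus a
    time term, so a fixed head start in votes is eventually overtaken
    by newer submissions."""
    net = ups - downs
    order = log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    # In the published description, ~45,000 seconds (12.5 hours) of age
    # is worth one order of magnitude of net votes.
    return round(sign * order + seconds_since_site_epoch / 45000, 7)
```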
Reddit Comment Ranking (new)
Hypothetically, suppose all users voted on the comment, and v out of N
up-voted. Then we could sort by proportion p = v/N of upvotes.
N = 16, v = 11, p = 11/16 = 0.6875
Reddit Comment Ranking
Actually, only n users out of N vote, giving an observed approximate
proportion p’ = v’/n
n = 3, v’ = 1, p’ = 1/3 ≈ 0.333
Reddit Comment Ranking
Limited sampling can rank comments wrong when we don’t have enough data.
Comment A: observed p’ = 0.333, true p = 0.6875
Comment B: observed p’ = 0.75, true p = 0.1875
Sorting by p’ puts B above A, the reverse of the true order.
Confidence interval
With probability 1 − α, the true value p lies within the central region (when sampled assuming p = p’)
Rank comments by lower bound
of confidence interval
p’ = observed proportion of upvotes
n = how many people voted
zα = how certain we want to be before we assume that p’ is “close” to the true p
Analytic solution for confidence interval, known as “Wilson score”
How not to sort by average rating, Evan Miller
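The lower bound of the Wilson score interval can be computed directly; a minimal sketch in Python (the function and variable names are mine, following the definitions above):

```python
from math import sqrt

def wilson_lower_bound(upvotes, n, z=1.96):
    """Lower bound of the Wilson score interval for the true upvote
    proportion p, given `upvotes` out of `n` total votes.
    z = 1.96 corresponds to roughly 95% confidence."""
    if n == 0:
        return 0.0
    p = upvotes / n                                  # observed p'
    center = p + z * z / (2 * n)
    spread = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - spread) / (1 + z * z / n)

# The example above: with only 1 of 3 votes observed the bound stays low;
# with all 16 votes in (11 up) it rises, so the comment ranks higher.
print(wilson_lower_bound(1, 3))    # ~0.06
print(wilson_lower_bound(11, 16))  # ~0.44
```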
User-item Recommendation
User-item matrix
Stores the “rating” of each user for each item. Could also be a binary
variable indicating whether the user clicked, liked, starred, shared,
purchased...
User-item matrix
• No content analysis. We know nothing about what is “in” each item.
• Typically very sparse – a user hasn’t watched even 1% of all
movies.
• The filtering problem is guessing the “unknown” entries in the matrix. Items
with high guessed values are what the user would want to see.
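A toy illustration of this data structure (the ratings are made up; NaN marks the unknown entries):

```python
import numpy as np

# Rows are users, columns are items; np.nan = unknown (not yet rated).
# In practice the matrix is enormous and stored sparsely, not densely.
R = np.array([
    [5.0, np.nan, 3.0, np.nan],    # user 0
    [np.nan, 4.0, np.nan, 1.0],    # user 1
    [4.0, np.nan, np.nan, np.nan]  # user 2
])

observed = ~np.isnan(R)
print(f"{observed.sum()} of {R.size} entries observed "
      f"({observed.mean():.0%} density)")
```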
Filtering process
Item-Based Collaborative Filtering Recommendation Algorithms, Sarwar et al
How to guess unknown rating?
Basic idea: suggest “similar” items.
Similar items are rated in a similar way by many different users.
Remember, “rating” could be a click, a like, a purchase.
o “Users who bought A also bought B...”
o “Users who clicked A also clicked B...”
o “Users who shared A also shared B...”
Similar items
Item-Based Collaborative Filtering Recommendation Algorithms, Sarwar et al
Item similarity
Cosine similarity!
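Concretely, treating each item as the vector of ratings it has received across users:

```latex
\mathrm{sim}(i, j) \;=\; \cos(\vec{\imath}, \vec{\jmath})
\;=\; \frac{\vec{\imath} \cdot \vec{\jmath}}{\lVert \vec{\imath} \rVert_2 \, \lVert \vec{\jmath} \rVert_2}
```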
Other distance measures
“adjusted cosine similarity”
Subtracts each user’s average rating to compensate for general
enthusiasm (“most movies suck” vs. “most movies are great”)
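Written out, with R(u,i) as user u's rating of item i and R̄_u as user u's average rating (the sums run over users who rated both items), the adjusted cosine similarity from Sarwar et al. is:

```latex
\mathrm{sim}(i, j) \;=\;
\frac{\sum_{u} \bigl(R_{u,i} - \bar{R}_u\bigr)\bigl(R_{u,j} - \bar{R}_u\bigr)}
     {\sqrt{\sum_{u} \bigl(R_{u,i} - \bar{R}_u\bigr)^2}\;
      \sqrt{\sum_{u} \bigl(R_{u,j} - \bar{R}_u\bigr)^2}}
```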
Generating a recommendation
Predict a rating as the weighted average of the user’s ratings of other items, weighted by each item’s similarity to the target item.
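A minimal sketch of that prediction step in Python, assuming a dense ratings matrix R with NaN for unknowns and a precomputed item-item similarity matrix sim (the names are mine):

```python
import numpy as np

def predict_rating(R, sim, user, item):
    """Predict R[user, item] as the similarity-weighted average of the
    ratings this user has already given to other items (item-based CF)."""
    rated = ~np.isnan(R[user])          # items this user has rated
    rated[item] = False                 # exclude the target item itself
    weights = sim[item, rated]
    ratings = R[user, rated]
    if np.abs(weights).sum() == 0:
        return np.nan                   # no overlapping evidence
    return np.dot(weights, ratings) / np.abs(weights).sum()
```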
Matrix factorization recommender
Matrix factorization recommender
Note: only sum over observed ratings rij.
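One standard way to write the objective, consistent with the λu, λv regularization in the plate model that follows (u_i and v_j are the K-dimensional user and item vectors; a sketch, not necessarily the exact variant used in any particular system):

```latex
\min_{U, V}\;
\sum_{(i,j)\ \text{observed}} \bigl(r_{ij} - u_i^{\top} v_j\bigr)^2
\;+\; \lambda_u \sum_i \lVert u_i \rVert^2
\;+\; \lambda_v \sum_j \lVert v_j \rVert^2
```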
Matrix factorization plate model
[Plate diagram: for i users and j items, a latent topic vector u for each user (variation in user topics, λu) and a latent topic vector v for each item (variation in item topics, λv) generate the observed user rating r of each item.]
New York Times recommender
Different Filtering Systems
Content:
Newsblaster analyzes the topics in the documents.
No concept of users.
Social:
What I see on Twitter is determined by who I follow.
Reddit comments are filtered using votes as input.
Amazon’s “people who bought X also bought Y” uses no content analysis.
Hybrid:
Recommend based on both content and user behavior.
Combining collaborative filtering
and topic modeling
Collaborative Topic Modeling for Recommending Scientific Articles, Wang and Blei
Content modeling - LDA
[Plate diagram: D docs, N words per doc, K topics. A per-document topic mixture (with a topic concentration parameter), per-topic word distributions (with a word concentration parameter), and a topic assignment for each word in each doc.]
Collaborative Topic Modeling
[Plate diagram: the LDA content part (K topics, per-doc topic mixtures, a topic for each word in each doc) is joined to a collaborative part: per-user topic vectors (with their own variation parameter), the weight of user selections, and each user’s rating of each doc.]
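A sketch of the generative assumptions in Wang and Blei's collaborative topic regression, using the symbols above (θ_j is the LDA topic-proportion vector for doc j; c_ij is the confidence weight on user i's selection of doc j):

```latex
\begin{aligned}
u_i &\sim \mathcal{N}\!\bigl(0,\; \lambda_u^{-1} I_K\bigr)
  && \text{topics for user } i\\
v_j &= \theta_j + \epsilon_j, \qquad \epsilon_j \sim \mathcal{N}\!\bigl(0,\; \lambda_v^{-1} I_K\bigr)
  && \text{doc topics plus a per-item offset}\\
r_{ij} &\sim \mathcal{N}\!\bigl(u_i^{\top} v_j,\; c_{ij}^{-1}\bigr)
  && \text{user rating of doc}
\end{aligned}
```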
[Figure from Wang and Blei: “content only” vs. “content + social”.]
Filtering News on Twitter
Reuters News Tracer
Pipeline: Filter → Cluster into events → Score veracity & newsworthiness → Searches and alerts
Liu et al., Reuters Tracer: A Large Scale System of Detecting &
Verifying Real-Time News Events from Twitter
Human-Machine Filters
TechMeme / MediaGazer
Facebook trending (with editors)
Facebook trending (without editors)
Facebook “trending review tool” screenshot from leaked documents
Approve or Reject: Can You Moderate Five New York Times Comments?
Revealed: Facebook's internal rulebook on sex, terrorism and violence, The Guardian
Facebook’s “Community Standards” document
Filter bubbles and other problems
Graph of political book sales during 2008 U.S. election, by orgnet.org
From Amazon "users who bought X also bought Y" data.
Retweet network of political tweets.
Political Polarization on Twitter, Conover et al.
Instagram co-tag graph, highlighting three distinct topical communities: 1) pro-Israeli
(Orange), 2) pro-Palestinian (Yellow), and 3) religious/Muslim (Purple)
Gilad Lotan, Betaworks
The Filter Bubble
What people care about politically, and what they’re motivated to do something
about, is a function of what they know about and what they see in their media.
... People see something about the deficit on the news, and they say, ‘Oh, the
deficit is the big problem.’ If they see something about the environment, they
say the environment is a big problem.
This creates this kind of a feedback loop in which your media influences your
preferences and your choices; your choices influence your media; and you
really can go down a long and narrow path, rather than actually seeing the
whole set of issues in front of us.
- Eli Pariser,
How do we recreate a front-page ethos for a digital world?
Are filters causing our bubbles?
Increasing U.S. polarization predates the Internet by decades.
Is the Internet Causing Political Polarization? Evidence from Demographics
Boxell, Gentzkow, Shapiro
Polarization is increasing fastest
among those who are online the least.
Exposure to Diverse Information on Facebook,
Eytan Bakshy, Lada Adamic, Solomon Messing
Will you see diverse content vs. will you click it?
Filter Design
Item Content: text analysis, topic modeling, clustering...
My Data: who I follow, what I’ve read/liked
Other Users’ Data: social network structure, other users’ likes
Filter design problem
Formally, given
U = user preferences, history, characteristics
S = current story
{P} = results of function on previous stories
{B} = background world knowledge (other users?)
Define
r(S, U, {P}, {B}) ∈ [0, 1]
relevance of story S to user U
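Purely as a schematic, the same definition as a function signature in Python (the names and structure are mine, not those of any particular system):

```python
from typing import Sequence

def relevance(story, user, previous_results: Sequence, background) -> float:
    """r(S, U, {P}, {B}): a score in [0, 1] for how relevant story S is
    to user U, given results on previous stories {P} and background world
    knowledge {B} (e.g. other users' behavior)."""
    score = 0.0
    # ...combine content features of `story`, the `user`'s history and
    # preferences, feedback in `previous_results`, and `background` signals...
    return min(max(score, 0.0), 1.0)
```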
Filter design problem, restated
When should a user see a story?
Aspects of this question:
normative: personal (what I want); societal (emergent group effects)
UI: how do I tell the computer what I want?
technical: constrained by what is algorithmically possible
economic: cheap enough to deploy widely
“Conversational health”
Measuring the health of our public conversations, Cortico.ai
Exposure diversity as a design principle for recommender systems, Natali Helberger
How to evaluate/optimize?
• Netflix: try to predict the rating that the user gives a movie
after watching it.
• Amazon: sell more stuff.
• Google, Facebook: human raters and A/B tests on every change (but
what do they optimize for?)
• Does the user understand how the filter works?
• Can they configure it as desired?
• Controls for abuse and harassment
• Can it be gamed? Spam, "user-generated censorship," etc.
Information diet
The holy grail in this model, as far as I’m
concerned, would be a Firefox plugin that would
passively watch your websurfing behavior and
characterize your personal information
consumption. Over the course of a week, it might
let you know that you hadn’t encountered any
news about Latin America, or remind you that a full
40% of the pages you read had to do with Sarah
Palin. It wouldn’t necessarily prescribe changes in
your behavior, simply help you monitor your own
consumption in the hopes that you might make
changes.
- Ethan Zuckerman,
Playing the Internet with PMOG


Editor's Notes

  • #2 To open: https://code.fb.com/core-data/recommending-items-to-more-than-a-billion-people/ NY comment quiz (in incognito) http://www.nytimes.com/interactive/2016/09/20/insider/approve-or-reject-moderation-quiz.html?_r=0
  • #5 Editors are filters
  • #6 Editors are filters
  • #7 https://www.businessinsider.com/facebook-news-feed-is-flawed-2016-5
  • #8 http://www.tubefilter.com/2014/12/01/youtube-300-hours-video-per-minute/
  • #13 https://medium.com/hacking-and-gonzo/how-reddit-ranking-algorithms-work-ef111e33d0d9
  • #17 See also http://jakevdp.github.io/blog/2014/06/12/frequentism-and-bayesianism-3-confidence-credibility/
  • #18 http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
  • #28 http://www.cs.columbia.edu/~blei/papers/WangBlei2011.pdf
  • #29 http://www.cs.columbia.edu/~blei/papers/WangBlei2011.pdf
  • #33 http://www.cs.columbia.edu/~blei/papers/WangBlei2011.pdf
  • #34 http://www.cs.columbia.edu/~blei/papers/WangBlei2011.pdf
  • #35 http://www.cs.columbia.edu/~blei/papers/WangBlei2011.pdf
  • #36 http://open.blogs.nytimes.com/2015/08/11/building-the-next-new-york-times-recommendation-engine/?_r=0
  • #39 Reuters News Tracer
  • #40 https://www.researchgate.net/publication/309471330_Reuters_Tracer_A_Large_Scale_System_of_Detecting_Verifying_Real-Time_News_Events_from_Twitter
  • #42 https://www.researchgate.net/publication/309471330_Reuters_Tracer_A_Large_Scale_System_of_Detecting_Verifying_Real-Time_News_Events_from_Twitter
  • #43 https://www.researchgate.net/publication/309471330_Reuters_Tracer_A_Large_Scale_System_of_Detecting_Verifying_Real-Time_News_Events_from_Twitter
  • #44 https://www.researchgate.net/publication/309471330_Reuters_Tracer_A_Large_Scale_System_of_Detecting_Verifying_Real-Time_News_Events_from_Twitter
  • #46 http://news.techmeme.com/081203/automated
  • #48 https://www.theguardian.com/technology/2016/may/12/facebook-trending-news-leaked-documents-editor-guidelines
  • #49 https://assets.documentcloud.org/documents/2830513/Facebook-Trending-Review-Guidelines.pdf
  • #50 http://www.nytimes.com/interactive/2016/09/20/insider/approve-or-reject-moderation-quiz.html?_r=0
  • #51 https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence
  • #52 https://www.facebook.com/communitystandards/introduction/
  • #58 https://www.vox.com/cards/congressional-dysfunction/what-is-political-polarization
  • #59 http://www.nber.org/papers/w23258
  • #60 https://research.fb.com/exposure-to-diverse-information-on-facebook-2/
  • #65 https://www.cortico.ai/blog/2018/2/29/public-sphere-health-indicators
  • #66 Photo from Munich algorithmic news conference