Managing Crowdsourced Human Computation: A Tutorial

The slides from the tutorial presented during the WWW2011 conference in Hyderabad, India (March 29, 2011), by Panos Ipeirotis.

1. Managing Crowdsourced Human Computation. Panos Ipeirotis, New York University. Slides from the WWW2011 tutorial, 29 March 2011.
2. Outline • Introduction: Human computation and crowdsourcing • Managing quality for simple tasks • Complex tasks using workflows • Task optimization • Incentivizing the crowd • Market design • Behavioral aspects and cognitive biases • Game design • Case studies
3. Human Computation, Round 1 • Humans were the first "computers," used for math computations. [Grier, When Computers Were Human, 2005; Grier, IEEE Annals 1998]
4. Human Computation, Round 1 • Humans were the first "computers," used for math computations • Organized computation: Clairaut, astronomy, 1758: computed Halley's comet orbit (three-body problem) by dividing the labor of numeric computations across 3 astronomers. [Grier, When Computers Were Human, 2005; Grier, IEEE Annals 1998]
5. Human Computation, Round 1 • Organized computation: – Maskelyne, astronomical almanac with moon positions, used for navigation, 1760. Quality assurance by doing calculations twice, compared by a third verifier. – De Prony, 1794, hires hairdressers (unemployed after the French revolution; knew only addition and subtraction) to create logarithmic and trigonometric tables. He managed the process by splitting the work into very detailed workflows. (Hairdressers better than mathematicians in arithmetic!) [Grier, When Computers Were Human, 2005; Grier, IEEE Annals 1998]
6. Human Computation, Round 1 • Organized computation: – Clairaut, astronomy, 1758 – Maskelyne, 1760 – De Prony, log/trig tables, 1794 – Galton, biology, 1893 – Pearson, biology, 1899 – … – Cowles, stock market, 1929 – Math Tables Project, unskilled labor, 1938. [Grier, When Computers Were Human, 2005; Grier, IEEE Annals 1998]
7. Human Computation, Round 1 • Patterns emerging: – Division of labor – Mass production – Professional managers • Then we got the "automatic computers"
8. Human Computation, Round 2 • Now we need humans again for the "AI-complete" tasks: – Tag images [ESP Game: von Ahn and Dabbish 2004; ImageNet] – Determine if a page is relevant [Alonso et al., 2011] – Determine song genre – Check a page for offensive content – … ImageNet: http://www.image-net.org/about-publication
9. Focus of the tutorial: Examine cases where humans interact with computers in order to solve a computational problem (usually too hard to be solved by computers alone).
10. Crowdsourcing and human computation • Crowdsourcing: from macro to micro – Netflix, Innocentive – Quirky, Threadless – oDesk, Guru, eLance, vWorker – Wikipedia et al. – ESP Game, FoldIt, Phylo, … – Mechanical Turk, CloudCrowd, … • Crowdsourcing greatly facilitates human computation (but they are not equivalent)
11. Micro-Crowdsourcing Example: Labeling Images using the ESP Game. Luis von Ahn, MacArthur Fellowship "genius grant" • Two-player online game • Partners don't know each other and can't communicate • Object of the game: type the same word • The only thing in common is an image
12. PLAYER 1 GUESSING: CAR, HAT, KID. PLAYER 2 GUESSING: BOY, CAR. SUCCESS! YOU AGREE ON CAR
13. Paid Crowdsourcing: Amazon Mechanical Turk
14. Demographics of MTurk workers (http://bit.ly/mturk-demographics). Country of residence: • United States: 46.80% • India: 34.00% • Miscellaneous: 19.20%
15. Demographics of MTurk workers (http://bit.ly/mturk-demographics)
16. Demographics of MTurk workers (http://bit.ly/mturk-demographics)
17. Outline • Introduction: Human computation and crowdsourcing • Managing quality for simple tasks • Complex tasks using workflows • Task optimization • Incentivizing the crowd • Market design • Behavioral aspects and cognitive biases • Game design • Case studies
18. Managing quality for simple tasks • Quality through redundancy: combining votes – Majority vote – Quality-adjusted vote – Managing dependencies • Quality through gold data • Estimating worker quality (redundancy + gold) • Joint estimation of worker quality and difficulty • Active data collection
19. Example: Build an "Adult Web Site" Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn)
21. Example: Build an "Adult Web Site" Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn). Cost/speed statistics: Undergrad intern: 200 websites/hr, cost: $15/hr
22. Example: Build an "Adult Web Site" Classifier • Need a large number of hand-labeled sites • Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn). Cost/speed statistics: Undergrad intern: 200 websites/hr, cost: $15/hr. MTurk: 2500 websites/hr, cost: $12/hr
23. Bad news: Spammers! Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience)
24. Majority Voting and Label Quality • Ask multiple labelers, keep the majority label as the "true" label • Quality is the probability of being correct. [Plot: quality of the majority vote vs. number of labelers, for individual-labeler accuracy p = 0.4 to 1.0; binary classification.] [Kuncheva et al., PA&A, 2003]
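As an aside, the curve on this slide follows directly from the binomial distribution. A minimal sketch in Python (the function name and the odd-n, independent-labeler assumptions are mine, not from the slides):

```python
from math import comb

def majority_vote_quality(p: float, n: int) -> float:
    """Probability that the majority of n independent labelers, each correct
    with probability p, yields the correct label (binary task, odd n)."""
    need = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

# majority_vote_quality(0.7, 1) = 0.70, majority_vote_quality(0.7, 11) ≈ 0.92:
# redundancy helps when p > 0.5, does nothing at p = 0.5, and hurts below it.
```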
25. What if qualities of workers are different? 3 workers, qualities: p−d, p, p+d. [Plot: region where the majority is better.] • Majority vote works best when workers have similar quality • Otherwise it is better to just pick the vote of the best worker • …or model worker qualities and combine [coming next]
26. Combining votes with different quality. [Clemen and Winkler, 1990]
27. What happens if we have dependencies? [Clemen and Winkler, 1985] Positive dependencies decrease the number of effective labelers.
28. What happens if we have dependencies? Yule's Q: a measure of correlation. [Kuncheva et al., PA&A, 2003] Positive dependencies decrease the number of effective labelers. Negative dependencies can improve results (it is unlikely that both workers are wrong at the same time).
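For reference, Yule's Q for a 2x2 table with cells a, b, c, d is (ad − bc)/(ad + bc). A small sketch applying it to a pair of labelers; treating the cells as joint correct/incorrect counts is my framing of how it is used here:

```python
def yules_q(both_correct: int, only_a_correct: int,
            only_b_correct: int, both_wrong: int) -> float:
    """Yule's Q for the 2x2 table of two labelers' outcomes on shared items.
    Q near +1: positive dependence (they tend to err together, so fewer
    effective labelers); Q near 0: independence; Q near -1: negative dependence."""
    ad = both_correct * both_wrong
    bc = only_a_correct * only_b_correct
    return (ad - bc) / (ad + bc)
```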
29. Vote combination: Meta-studies • Simple averages tend to work well • Complex models are slightly better but less robust [Clemen and Winkler, 1999; Ariely et al., 2000]
30. From aggregate labels to worker quality. Look at our spammer friend ATAMRO447HWJQ together with the other 9 workers. After aggregation, we compute a confusion matrix for each worker. After majority vote, confusion matrix for ATAMRO447HWJQ: P[G → G]=100%, P[G → X]=0%, P[X → G]=100%, P[X → X]=0%
31. Algorithm of Dawid & Skene, 1979. Iterative process to estimate worker error rates: 1. Initialize by aggregating labels for each object (e.g., use majority vote) 2. Estimate the confusion matrix for each worker (using the aggregate labels) 3. Estimate the aggregate labels (using the confusion matrices); keep labels for "gold data" unchanged 4. Go to Step 2 and iterate until convergence. Confusion matrix for ATAMRO447HWJQ: P[G → G]=99.947%, P[G → X]=0.053%, P[X → G]=99.153%, P[X → X]=0.847%. Our friend ATAMRO447HWJQ marked almost all sites as G. Seems like a spammer…
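A compact sketch of this iteration (the input format, the smoothing, and the fixed iteration count are my assumptions; the get-another-label package mentioned on slide 62 is a full implementation):

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """Iterative Dawid & Skene-style estimation.
    labels[worker][obj] = observed class index (a hypothetical input format).
    Returns soft labels per object and a confusion matrix per worker."""
    objects = sorted({o for votes in labels.values() for o in votes})
    # Step 1: initialize soft labels by majority vote
    soft = {o: np.zeros(n_classes) for o in objects}
    for votes in labels.values():
        for o, c in votes.items():
            soft[o][c] += 1
    for o in objects:
        soft[o] /= soft[o].sum()
    for _ in range(n_iter):
        # Step 2: estimate each worker's confusion matrix from the current soft labels
        conf = {}
        for w, votes in labels.items():
            m = np.full((n_classes, n_classes), 1e-6)   # rows: true class, cols: observed
            for o, c in votes.items():
                m[:, c] += soft[o]
            conf[w] = m / m.sum(axis=1, keepdims=True)
        # Step 3: re-estimate the soft labels from the confusion matrices and class priors
        prior = np.mean([soft[o] for o in objects], axis=0) + 1e-9
        for o in objects:
            logp = np.log(prior)
            for w, votes in labels.items():
                if o in votes:
                    logp += np.log(conf[w][:, votes[o]])
            p = np.exp(logp - logp.max())
            soft[o] = p / p.sum()
    return soft, conf
```

To pin gold data, as in step 3 of the slide, one would simply reset those objects' soft labels to their known values inside the loop.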
32. And many variations… • van der Linden et al., 1997: Item-Response Theory • Uebersax, Biostatistics 1993: Ordered categories • Uebersax, JASA 1993: Ordered categories, with worker expertise and bias, item difficulty • Carpenter, 2008: Hierarchical Bayesian versions. And more recently at NIPS: • Whitehill et al., 2009: Adding item difficulty • Welinder et al., 2010: Adding worker expertise
33. Challenge: From Confusion Matrices to Quality Scores. All the algorithms will generate "confusion matrices" for workers. Confusion matrix for ATAMRO447HWJQ: P[X → X]=0.847%, P[X → G]=99.153%, P[G → X]=0.053%, P[G → G]=99.947%. How to check if a worker is a spammer using the confusion matrix? (hint: the error rate is not enough)
34. Challenge 1: Spammers are lazy and smart! Confusion matrix for spammer: P[X → X]=0%, P[X → G]=100%, P[G → X]=0%, P[G → G]=100%. Confusion matrix for good worker: P[X → X]=80%, P[X → G]=20%, P[G → X]=20%, P[G → G]=80%. • Spammers figure out how to fly under the radar… • In reality, we have 85% G sites and 15% X sites • Errors of spammer = 0% × 85% + 100% × 15% = 15% • Error rate of good worker = 20% × 85% + 20% × 15% = 20%. False negatives: spam workers pass as legitimate.
35. Challenge 2: Humans are biased! Error rates for the CEO of AdSafe: P[G → G]=20.0%, P[G → P]=80.0%, P[G → R]=0.0%, P[G → X]=0.0%; P[P → G]=0.0%, P[P → P]=0.0%, P[P → R]=100.0%, P[P → X]=0.0%; P[R → G]=0.0%, P[R → P]=0.0%, P[R → R]=100.0%, P[R → X]=0.0%; P[X → G]=0.0%, P[X → P]=0.0%, P[X → R]=0.0%, P[X → X]=100.0%. In reality, we have 85% G sites, 5% P sites, 5% R sites, 5% X sites. Errors of spammer (all in G) = 0% × 85% + 100% × 15% = 15%. Error rate of biased worker = 80% × 85% + 100% × 5% = 73%. False positives: legitimate workers appear to be spammers.
36. Solution: Reverse errors first, compute the error rate afterwards. Error rates for the CEO of AdSafe: P[G → G]=20.0%, P[G → P]=80.0%, P[G → R]=0.0%, P[G → X]=0.0%; P[P → G]=0.0%, P[P → P]=0.0%, P[P → R]=100.0%, P[P → X]=0.0%; P[R → G]=0.0%, P[R → P]=0.0%, P[R → R]=100.0%, P[R → X]=0.0%; P[X → G]=0.0%, P[X → P]=0.0%, P[X → R]=0.0%, P[X → X]=100.0%. • When the biased worker says G, it is 100% G • When the biased worker says P, it is 100% G • When the biased worker says R, it is 50% P, 50% R • When the biased worker says X, it is 100% X. Small ambiguity for the "R-rated" votes, but other than that, fine!
37. Solution: Reverse errors first, compute the error rate afterwards. Error rates for spammer ATAMRO447HWJQ: P[G → G]=100.0%, P[G → P]=0.0%, P[G → R]=0.0%, P[G → X]=0.0%; P[P → G]=100.0%, P[P → P]=0.0%, P[P → R]=0.0%, P[P → X]=0.0%; P[R → G]=100.0%, P[R → P]=0.0%, P[R → R]=0.0%, P[R → X]=0.0%; P[X → G]=100.0%, P[X → P]=0.0%, P[X → R]=0.0%, P[X → X]=0.0%. • When the spammer says G, it is 25% G, 25% P, 25% R, 25% X • The same holds when the spammer says P, R, or X [note: assume equal priors]. The results are highly ambiguous. No information provided!
38. Quality Scores • High cost when "soft" labels have probability spread across classes • Low cost when "soft" labels have probability mass concentrated in one class. Examples (assigned label → "soft" label → cost): G → <G: 25%, P: 25%, R: 25%, X: 25%> → 0.75; G → <G: 99%, P: 1%, R: 0%, X: 0%> → 0.0198. [Assume equal misclassification costs.] [Ipeirotis, Provost, Wang, HCOMP 2010]
39. Quality Score • A spammer is a worker who always assigns labels randomly, regardless of what the true class is. QualityScore = 1 − ExpCost(Worker) / ExpCost(Spammer) • The QualityScore is useful for the purpose of blocking bad workers and rewarding good ones • Essentially a multi-class, cost-sensitive AUC metric (AUC = area under the ROC curve)
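A sketch of where these numbers come from, assuming equal misclassification costs as on slide 38 (the function names are mine):

```python
import numpy as np

def expected_cost(soft_label, costs=None):
    """Expected misclassification cost of a 'soft' label (probability vector).
    With equal costs this is sum_{i != j} p_i * p_j:
    [0.25, 0.25, 0.25, 0.25] -> 0.75 and [0.99, 0.01, 0, 0] -> 0.0198,
    matching the examples on slide 38."""
    p = np.asarray(soft_label, dtype=float)
    if costs is None:
        costs = 1.0 - np.eye(len(p))   # cost 1 off the diagonal, 0 on it
    return float(p @ costs @ p)

def quality_score(worker_cost, spammer_cost):
    """QualityScore = 1 - ExpCost(Worker)/ExpCost(Spammer): roughly 0 for a
    random spammer, 1 for a perfect worker."""
    return 1.0 - worker_cost / spammer_cost
```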
40. What about gold testing? Naturally integrated into the latent class model: 1. Initialize by aggregating labels for each object (e.g., use majority vote) 2. Estimate error rates for workers (using the aggregate labels) 3. Estimate the aggregate labels (using the error rates; weight worker votes according to quality); keep labels for "gold data" unchanged 4. Go to Step 2 and iterate until convergence
41. Gold Testing • 3 labels per example • 2 categories, 50/50 • Quality range: 0.55:0.05:1.0 • 200 labelers. No significant advantage under "good conditions" (balanced datasets, good worker quality). http://bit.ly/gold-or-repeated [Wang, Ipeirotis, Provost, WCBI 2011]
42. Gold Testing • 5 labels per example • 2 categories, 50/50 • Quality range: 0.55 to 1.0 • 200 labelers. No significant advantage under "good conditions" (balanced datasets, good worker quality).
43. Gold Testing • 10 labels per example • 2 categories, 50/50 • Quality range: 0.55 to 1.0 • 200 labelers. No significant advantage under "good conditions" (balanced datasets, good worker quality).
44. Gold Testing • 10 labels per example • 2 categories, 90/10 • Quality range: 0.55 to 1.0 • 200 labelers. Advantage under imbalanced datasets.
45. Gold Testing • 5 labels per example • 2 categories, 50/50 • Quality range: 0.55 to 0.65 • 200 labelers. Advantage with bad worker quality.
46. Gold Testing? • 10 labels per example • 2 categories, 90/10 • Quality range: 0.55 to 0.65 • 200 labelers. Significant advantage under "bad conditions" (imbalanced datasets, bad worker quality).
47. Testing workers • An exploration-exploitation scheme: – Explore: learn about the quality of the workers – Exploit: label new examples using the quality
48. Testing workers • An exploration-exploitation scheme: – Assign gold labels when the benefit of learning a worker's quality better outweighs the loss of labeling a gold (known-label) example [Wang et al., WCBI 2011] – Assign an already labeled example (by other workers) and see if it agrees with the majority [Donmez et al., KDD 2009] – If worker quality changes over time, assume accuracy given by an HMM and φ(τ) = φ(τ−1) + Δ [Donmez et al., SDM 2010]
49. Example: Build an "Adult Web Site" Classifier. Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn). But we are not going to label the whole Internet… Expensive. Slow.
50. Integrating with Machine Learning • Crowdsourcing is cheap but not free; it cannot scale to the web without help • Solution: build automatic classification models using the crowdsourced data
51. Simple solution • Humans label training data • Use the training data to build a model. [Diagram: new case → automatic model (through machine learning, trained on data from existing crowdsourced answers) → automatic answer]
52. Quality and Classification Performance. Noisy labels lead to degraded task performance. Labeling quality increases → classification quality increases. [Plot: AUC vs. number of examples ("Mushroom" data set) for single-labeler quality 50%, 60%, 80%, and 100%; quality is the probability of assigning a binary label correctly.] http://bit.ly/gold-or-repeated [Sheng, Provost, Ipeirotis, KDD 2008]
53. Tradeoffs for Machine Learning Models • Get more data → improve model accuracy • Improve data quality → improve classification. [Plot: accuracy vs. number of examples (Mushroom) for data quality 50%, 60%, 80%, and 100%.]
54. Tradeoffs for Machine Learning Models • Get more data: Active Learning, select which unlabeled example to label [Settles, http://active-learning.net/] • Improve data quality: Repeated Labeling, label again an already labeled example [Sheng et al. 2008; Ipeirotis et al., 2010]
55. Scaling Crowdsourcing: Iterative training • Use the model when confident, humans otherwise • Retrain with the new human input → improve the model → reduce the need for humans. [Diagram: new case → automatic model → automatic answer when confident; otherwise get human(s) to answer and add to the crowdsourced training data]
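A minimal sketch of this triage loop; the model interface, the `ask_humans` callback, and the confidence threshold are hypothetical placeholders, not an API from the tutorial:

```python
def triage(examples, model, ask_humans, threshold=0.9):
    """Answer with the model when it is confident, route to humans otherwise,
    and retrain on the accumulated human answers."""
    answers, human_labeled = {}, []
    for x in examples:
        label, confidence = model.predict_with_confidence(x)
        if confidence >= threshold:
            answers[x] = label                  # machine when confident
        else:
            label = ask_humans(x)               # human(s) when not confident
            answers[x] = label
            human_labeled.append((x, label))
    if human_labeled:
        model.retrain(human_labeled)            # better model -> fewer humans next round
    return answers
```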
56. Rule of Thumb Results • With high-quality labelers (80% and above): one worker per case (more data is better) • With low-quality labelers (~60%): multiple workers per case (to improve quality) [Sheng et al., KDD 2008; Kumar and Lease, CSDM 2011]
57. Dawid & Skene meets a Classifier • [Raykar et al., JMLR 2010]: use the Dawid & Skene scheme but add a classifier as an additional worker • The classifier in each iteration learns from the consensus labeling
58. Selective Repeated-Labeling • We do not need to label everything the same number of times • Key observation: we have additional information to guide the selection of data for repeated labeling: the current multiset of labels • Example: {+,−,+,−,−,+} vs. {+,+,+,+,+,+}
59. Label Uncertainty: Focus on uncertainty • If we know worker qualities, we can estimate the log-odds for each example • Assign labels first to the examples that are most uncertain (log-odds close to 0 in the binary case)
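Under the usual simplifying assumptions (independent workers with symmetric accuracy, and a prior that defaults to uniform), the log-odds on this slide can be computed as in this sketch:

```python
from math import log

def label_log_odds(votes, prior_pos=0.5):
    """Posterior log-odds that the true binary label is '+'.
    votes: list of (vote, accuracy) pairs, vote in {'+', '-'}, accuracy = the
    worker's probability of being correct. Examples with log-odds near 0 are
    the most uncertain and are the ones to relabel first."""
    odds = log(prior_pos / (1 - prior_pos))
    for vote, q in votes:
        llr = log(q / (1 - q))        # evidence contributed by a '+' vote
        odds += llr if vote == '+' else -llr
    return odds

# label_log_odds([('+', 0.9), ('-', 0.6), ('+', 0.8)]) > 0  -> leaning towards '+'
```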
60. Model Uncertainty (MU) • Learning models of the data provides an alternative source of information about label certainty • Model uncertainty: get more labels for the instances that cause model uncertainty • Intuition: – for modeling, why improve training-data quality where the model already is certain? (a "self-healing" process [Brodley et al., JAIR 1999; Ipeirotis et al., NYU 2010]) – for data quality, low-certainty "regions" may be due to incorrect labeling of the corresponding instances. [Diagram: examples that cause model uncertainty]
61. Adult content classification. [Plot: selective labeling vs. round robin]
62. Too much theory? Open-source implementation available at http://code.google.com/p/get-another-label/ • Input: – Labels from Mechanical Turk – Cost of incorrect labelings (e.g., X→G costlier than G→X) • Output: – Corrected labels – Worker error rates – Ranking of workers according to their quality
63. Learning from imperfect data • With inherently noisy data, it is good to have learning algorithms that are robust to noise • Or use techniques designed to handle explicitly noisy data [Lugosi 1992; Smyth, 1995, 1996]. [Plot: accuracy vs. number of examples (Mushroom)]
64. Outline • Introduction: Human computation and crowdsourcing • Managing quality for simple tasks • Complex tasks using workflows • Task optimization • Incentivizing the crowd • Market design • Behavioral aspects and cognitive biases • Game design • Case studies
65. How to handle free-form answers? • Q: "My task does not have discrete answers…." • A: Break into two HITs: a "Create" HIT and a "Vote" HIT. Creation HIT (e.g., find a URL about a topic) → Voting HIT (correct or not?) • The Vote HIT controls the quality of the Creation HIT • Redundancy controls the quality of the Voting HIT • Catch: if the "creation" is very good, in voting workers just vote "yes" – Solution: add some random noise (e.g., add typos). Example: collect URLs
66. But my free-form answer is not just right or wrong… ("Describe this image") • "Create" HIT • "Improve" HIT • "Compare" HIT. Creation HIT (e.g., describe the image) → Improve HIT (e.g., improve the description) → Compare HIT (voting: which is better?). TurKit toolkit [Little et al., UIST 2010]: http://groups.csail.mit.edu/uid/turkit/
67. version 1: "A parial view of a pocket calculator together with some coins and a pen." version 2: "A view of personal items a calculator, and some gold and copper coins, and a round tip pen, these are all pocket and wallet sized item used for business, writting, calculating prices or solving math problems and purchasing items." version 3: "A close-up photograph of the following items: A CASIO multi-function calculator. A ball point pen, uncapped. Various coins, apparently European, both copper and gold. Seems to be a theme illustration for a brochure or document cover treating finance, probably personal finance." version 4: "…Various British coins; two of £1 value, three of 20p value and one of 1p value. …" version 8: "A close-up photograph of the following items: A CASIO multi-function, solar powered scientific calculator. A blue ball point pen with a blue rubber grip and the tip extended. Six British coins; two of £1 value, three of 20p value and one of 1p value. Seems to be a theme illustration for a brochure or document cover treating finance - probably personal finance."
68. Independence or Not? • Building iteratively (lack of independence) allows better outcomes for the image-description task… • In the FoldIt game, workers built on each other's results [Little et al., HCOMP 2010]
69. Independence or Not? • But lack of independence may cause high dependence on the starting conditions and create groupthink • …but it also prevents disasters [Little et al., HCOMP 2010]
70. Independence or Not? Collective Problem Solving • Exploration/exploitation tradeoff: – Can accelerate learning, by sharing good solutions – But can lead to premature convergence on a suboptimal solution [Mason and Watts, submitted to Science, 2011]
71. Individual search strategy affects group success • More players copying each other (i.e., fewer exploring) in the current round → lower probability of finding the peak on the next round
72. The role of Communication Networks • Examine various "neighbor" structures (who talks to whom about the oil levels)
73. Network structure affects individual search strategy • Higher clustering → higher probability of neighbors guessing in identical locations • More neighbors guessing in identical locations → higher probability of copying
74-81. Diffusion of Best Solution (animation shown across eight slides)
82. Individual search strategy affects group success • No significant differences in the % of games in which the peak was found • Network affects willingness to explore
83. Network structure affects group success
84. TurKontrol: Decision-Theoretic Modeling • Optimizing workflow execution using decision-theoretic approaches [Dai et al., AAAI 2010; Kern et al., 2010] • Significant work in control theory [Montgomery, 2007]
85. Common Workflow Patterns (http://www.workflowpatterns.com). Basic control flow: • Sequence • Parallel split • Synchronization • Exclusive choice • Simple merge. Iteration: • Arbitrary cycles (goto) • Structured loop (for, while, repeat) • Recursion
86. Soylent • A word processor with the crowd embedded [Bernstein et al., UIST 2010] • "Proofread paper": ask workers to proofread each paragraph – Lazy Turker: fixes the minimum possible (e.g., a single typo) – Eager Beaver: fixes way beyond the necessary but adds extra errors (e.g., inline suggestions on writing style) • Find-Fix-Verify pattern – Separating Find from Fix does not allow the Lazy Turker – A separate Fix-Verify step ensures quality
87. Find: "Identify at least one area that can be shortened without changing the meaning of the paragraph." (Independent agreement to identify patches.) Fix: "Edit the highlighted section to shorten its length without changing the meaning of the paragraph." (Randomize the order of suggestions.) Verify: "Choose at least one rewrite that has style errors, and at least one rewrite that changes the meaning of the sentence." (Example paragraph: "Soylent, a prototype...")
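A sketch of the Find-Fix-Verify control flow described above; `post_hit(stage, payload, n)` is a hypothetical helper that posts a HIT and returns the workers' answers, not part of any real API:

```python
def find_fix_verify(paragraph, post_hit, agreement=2, n_workers=5):
    """Find patches with independent agreement, collect candidate fixes,
    then have workers vote out bad rewrites (Bernstein et al., UIST 2010 pattern)."""
    # Find: keep only spans flagged by at least `agreement` independent workers
    worker_spans = post_hit("find", paragraph, n_workers)   # one list of spans per worker
    counts = {}
    for spans in worker_spans:
        for span in set(spans):
            counts[span] = counts.get(span, 0) + 1
    agreed = [s for s, c in counts.items() if c >= agreement]

    accepted = {}
    for span in agreed:
        rewrites = post_hit("fix", span, n_workers)              # candidate rewrites
        votes = post_hit("verify", (span, rewrites), n_workers)  # approval votes per rewrite
        accepted[span] = max(rewrites, key=lambda r: votes.get(r, 0))
    return accepted
```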
88. Crowd-created Workflows: CrowdForge • A Map-Reduce framework for crowds [Kittur et al., CHI 2011] – Identify sights worth checking out (one tip per worker) • Vote and rank – Brief tips for each monument (one tip per worker) • Vote and rank – Aggregate the tips into a meaningful summary • Iterate to improve… "My Boss is a Robot" (mybossisarobot.com), Nikki Kittur (CMU) + Jim Giles (New Scientist)
89. Crowd-created Workflows: TurKomatic • The crowd creates the workflows • Turkomatic [Kulkarni et al., CHI 2011]: 1. Ask workers to decompose the task into steps (Map) 2. Can the step be completed within 10 minutes? Yes: solve it. No: decompose further (recursion) 3. Given all partial solutions, solve the big problem (Reduce)
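The recursive decompose/solve/merge idea can be sketched as below; the four callables are hypothetical placeholders for posting the corresponding HITs:

```python
def recursive_solve(task, decompose_hit, solve_hit, merge_hit, minutes_estimate):
    """Decompose until a step fits in ~10 minutes, solve the leaves, then merge."""
    if minutes_estimate(task) <= 10:        # small enough: solve directly
        return solve_hit(task)
    steps = decompose_hit(task)             # Map: workers split the task into steps
    partial = [recursive_solve(s, decompose_hit, solve_hit, merge_hit, minutes_estimate)
               for s in steps]              # recurse on each step
    return merge_hit(task, partial)         # Reduce: workers combine the partial solutions
```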
90. Crowdsourcing Patterns • Creation: Generate/Create; Find; Improve/Edit/Fix • Quality control: Vote for accept/reject; Vote up/down to generate a rank; Vote for best / select top-k • Flow control: Split task; Aggregate
91. Outline • Introduction: Human computation and crowdsourcing • Managing quality for simple tasks • Complex tasks using workflows • Task optimization • Incentivizing the crowd • Market design • Behavioral aspects and cognitive biases • Game design • Case studies
92. Defining Task Parameters. Three main goals: • Minimize cost (cheap) • Maximize quality (good) • Minimize completion time (fast)
93. Effect of Payment: Quality • Cost does not affect quality [Mason and Watts, 2009; AdSafe] • Similar results for bigger tasks [Ariely et al., 2009]. [Plot: error rate vs. number of labelers for payments of 2, 5, and 10 cents.]
94. Effect of Payment: #Tasks • Payment incentives increase speed, though [Mason and Watts, 2009]
95. Predicting Completion Time • Model the timing of an individual task [Yan, Kumar, Ganesan, 2010] – Assume a rate of task completion λ – Exponential distribution for a single task – Erlang distribution for sequential tasks – On-the-fly estimation of λ for parallel tasks • Optimize using early acceptance/termination – Sequential experiment setting – Stop early if confident
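Under the slide's modeling assumption (each task completes at rate λ, so k sequential tasks follow an Erlang-k distribution), the chance of finishing by a deadline is the Erlang CDF; a small sketch, not the paper's code:

```python
from math import exp, factorial

def erlang_cdf(t: float, k: int, lam: float) -> float:
    """P(k sequential tasks, each exponential with rate lam, finish within time t)."""
    return 1.0 - exp(-lam * t) * sum((lam * t) ** n / factorial(n) for n in range(k))

# e.g. at lam = 2 tasks/hour, 5 sequential tasks finish within 4 hours
# with probability erlang_cdf(4.0, 5, 2.0) ≈ 0.90
```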
96. Predicting Completion Time • For Freebase, workers take log-normal time to complete a task [Kochhar et al., HCOMP 2010]
97. Predicting Completion Time • The exponential assumption is usually not realistic • Heavy-tailed distribution of completion times [Ipeirotis, XRDS 2010]
98. Effect of #HITs: Monotonic, but sublinear. h(t) = 0.998^#HITs • 10 HITs → 2% slower than 1 HIT • 100 HITs → 19% slower than 1 HIT • 1000 HITs → 87% slower than 1 HIT, or: 1 group of 1000 is about 7 times faster than 1000 sequential groups of 1 [Wang et al., CSDM 2011]
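To make the rule of thumb concrete: the fitted factor 0.998^#HITs multiplies the completion (hazard) rate, so larger groups complete each HIT a bit more slowly but the whole group far faster than posting sequentially. A quick sketch of the arithmetic (reading 1/m as the relative wait per HIT assumes an exponential-style approximation):

```python
def hazard_multiplier(n_hits: int, base: float = 0.998) -> float:
    """Slowdown factor h = base ** n_hits applied to the completion rate,
    per the rule of thumb on this slide (Wang et al., CSDM 2011)."""
    return base ** n_hits

for n in (1, 10, 100, 1000):
    m = hazard_multiplier(n)
    print(f"{n:5d} HITs: rate x {m:.3f}  (~{1 - m:.0%} lower rate, ~{1 / m:.1f}x the wait per HIT)")
```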
99. HIT Topics • topic 1: cw, castingwords, podcast, transcribe, english, mp3, edit, confirm, snippet, grade • topic 2: data, collection, search, image, entry, listings, website, review, survey, opinion • topic 3: categorization, product, video, page, smartsheet, web, comment, website, opinion • topic 4: easy, quick, survey, money, research, fast, simple, form, answers, link • topic 5: question, answer, nanonano, dinkle, article, write, writing, review, blog, articles • topic 6: writing, answer, article, question, opinion, short, advice, editing, rewriting, paul • topic 7: transcribe, transcription, improve, retranscribe, edit, answerly, voicemail, query, question, answer [Wang et al., CSDM 2011]
100. Effect of Topic: The CastingWords Effect (topic list as on slide 99). [Wang et al., CSDM 2011]
101. Effect of Topic: Surveys = fast (even with redundancy!) (topic list as on slide 99). [Wang et al., CSDM 2011]
102. Effect of Topic: Writing takes time (topic list as on slide 99). [Wang et al., CSDM 2011]
103. Optimizing Completion Time • Workers pick tasks that have a large number of HITs or are recent [Chilton et al., HCOMP 2010] • VizWiz optimizations [Bigham et al., UIST 2010]: – Post HITs continuously (to be recent) – Make big HIT groups (to be large) – HITs are "external HITs" (i.e., IFRAME hosted) – HITs are populated when the worker accepts them
104. Optimizing Completion Time • The completion rate varies with the time of day, depending on the audience location (India vs. US vs. Middle East) • Quality tends to remain the same, independent of completion time [Huang et al., HCOMP 2010]
105. Other Optimizations • Qurk [Marcus et al., CIDR 2011] and CrowdDB [Franklin et al., SIGMOD 2011]: treat humans as uncertain UDFs and apply relational optimization, plus the "GoodEnough" and "StopAfter" operators • CrowdFlow [Quinn et al.]: integrate the crowd with machine learning to reach a balance of speed, quality, and cost • Ask humans for directions in a graph [Parameswaran et al., VLDB 2011]; see also [Kleinberg, Nature 2000; Mitzenmacher, XRDS 2010; Deng, ECCV 2010]
106. Outline • Introduction: Human computation and crowdsourcing • Managing quality for simple tasks • Complex tasks using workflows • Task optimization • Incentivizing the crowd • Market design • Behavioral aspects and cognitive biases • Game design • Case studies
107. Incentives • Monetary • Self-serving • Altruistic
108. Incentives: Money • Money does not improve quality but (generally) increases participation [Ariely, 2009; Mason & Watts, 2009] • But workers may be "target earners" (stop after reaching their daily goal) [Horton & Chilton, 2010 for MTurk; Camerer et al. 1997 and Farber 2008 for taxi drivers; Fehr and Goette 2007]
109. Incentives: Money and Trouble • Careful: paying a little is often worse than paying nothing! – "Pay enough or not at all" [Gneezy et al., 2000] – Small pay now locks in future pay – Payment replaces internal motivation (paying kids to collect donations decreased enthusiasm; spam classification; "thanks for dinner, here is $100") – Lesson: be the Tom Sawyer ("how I like painting the fence"), not the scrooge-y boss… • Paying a lot is a counter-incentive: – People focus on the reward and not on the task – On MTurk, spammers routinely attack highly-paying tasks
  110. 110. Incentives• Monetary• Self‐serving• Altruistic l i i
111. Incentives: Leaderboards
• Leaderboards (“top participants”) are a frequent motivator
  – Should motivate correct behavior, not just measurable behavior
  – Newcomers should have hope of reaching the top
  – Whatever is measured, workers will optimize for it (e.g., the Orkut country leaderboard; complaints when quality scores drop)
  – Design guideline: Christmas-tree dashboard (green/red lights only) [Farmer and Glass, 2010]
112. Incentives: Purpose of Work
• Contrafreeloading: rats and other animals prefer to “earn” their food
• Destroying work after production demotivates workers [Ariely et al., 2008]
• Showing the result of a “completed task” improves satisfaction
113. Incentives: Purpose of Work
• Workers enjoy learning new skills (an oft-cited reason for MTurk participation)
• Design tasks to be educational
  – Duolingo: translate while learning a new language [von Ahn et al., duolingo.com]
  – Galaxy Zoo, Clickworkers: classify astronomical objects [Raddick et al., 2010; http://en.wikipedia.org/wiki/Clickworkers]
  – Citizen Science: learn about biology [http://www.birds.cornell.edu/citsci/]
  – National Geographic “Field Expedition: Mongolia”: tag potential archaeological sites, learn about archaeology
114. Incentives: Credit and Participation
• Public credit contributes to a sense of participation
• Credit is also a form of reputation
• (The anonymity of MTurk-like settings discourages this factor)
115. Incentives
• Monetary
• Self-serving
• Altruistic
116. Incentive: Altruism
• Contributing back (tit for tat): early reviewers wrote reviews because they had read other useful reviews
• The effect is amplified in social networks: “If all my friends do it…” or “Since all my friends will see this…”
• Contributing to a shared goal
117. Incentives: Altruism and Purpose
• On MTurk [Chandler and Kapelner, 2010]:
  – Americans [older, more leisure-driven] worked harder for “meaningful work”
  – Indians [more income-driven] were not affected
  – Quality was unchanged for both groups
118. Incentives: Fair Share
• Anecdote: the same HIT (spam classification)
  – Case 1: The requester ran it as a side project, to “clean the market”; it would be an out-of-pocket expense, with no pay to workers
  – Case 2: The requester was a university researcher; spam classification was now a university research project, with payment to workers
Which setting worked best?
119. Incentives: FUN!
• Game-ify the task (design details later)
• Examples
  – ESP Game: given an image, type the same word (generated image descriptions)
  – Phylo: align colored blocks (used for genome alignment)
  – FoldIt: fold structures to optimize energy (protein folding)
• Fun factors [Malone 1980, 1982]:
  – timed response
  – score keeping
  – player skill level
  – high-score lists
  – and randomness
120. Outline
• Introduction: Human computation and crowdsourcing
• Managing quality for simple tasks
• Complex tasks using workflows
• Task optimization
• Incentivizing the crowd
• Market design
• Behavioral aspects and cognitive biases
• Game design
• Case studies
121. Market Design Organizes the Crowd
• Reputation mechanisms
  – Seller side: ensure worker quality
  – Buyer side: ensure employer trustworthiness
• Task organization for task discovery (worker finds employer/task)
• Worker expertise recording for task assignment (employer/task finds worker)
122. Lack of Reputation and the Market for Lemons
• “When the quality of a sold good is uncertain and hidden before the transaction, the price drops to the value of the lowest-valued good” [Akerlof, 1970; Nobel prize winner]
Market evolution steps (a simulation sketch follows below):
1. Employers pay $10 to a good worker, $0.1 to a bad worker
2. 50% good workers, 50% bad; indistinguishable from each other
3. Employers offer a price in the middle: $5
4. Some good workers leave the market (pay too low)
5. Employers revise prices downwards as the % of bad workers increases
6. More good workers leave the market… death spiral
http://en.wikipedia.org/wiki/The_Market_for_Lemons
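The death spiral above can be reproduced with a few lines of simulation; the dollar values mirror the slide, while the reservation wage and exit rule are invented for illustration.

```python
# Toy simulation of the lemons death spiral described above.
# The $10 / $0.1 values are the slide's illustrative numbers; the
# reservation wage and exit rate are assumptions, not market estimates.
good_value, bad_value = 10.0, 0.1      # value of work by worker type
good, bad = 50, 50                     # initial worker counts
good_reservation = 6.0                 # good workers leave below this wage

for round_ in range(1, 8):
    share_good = good / (good + bad)
    offer = share_good * good_value + (1 - share_good) * bad_value  # pooled wage
    if offer < good_reservation:
        good = int(good * 0.5)         # half of the good workers exit
    print(f"round {round_}: offer=${offer:.2f}, good={good}, bad={bad}")
    if good == 0:
        print("only lemons remain; no-trade equilibrium")
        break
```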
123. Lack of Reputation and the Market for Lemons
• The market for lemons also exists on the employer side:
  – Workers distrust (good) newcomer employers: they charge a risk premium, or work only a little bit; good newcomers get disappointed
  – Bad newcomers have no downside (they will not pay anyway), so they keep offering work
  – The market floods with bad employers
• Turkopticon: an external reputation system
• “Mechanical Turk: Now with 40.92% spam” http://bit.ly/ew6vg4
• Gresham’s Law: the bad drives out the good
• No-trade equilibrium: no good employer offers work in a market with bad workers, and no good worker wants to work for bad employers…
• In reality, we need to take into account that this is a repeated game (but participation follows a heavy tail…)
http://en.wikipedia.org/wiki/The_Market_for_Lemons
124. Reputation Systems
• A significant number of reputation mechanisms exist [Dellarocas et al., 2007]
• Link-analysis techniques [TrustRank, EigenTrust, NodeRanking, NetProbe, Snare] are often applicable (see the sketch below)
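As a concrete example of the link-analysis flavor, here is a minimal EigenTrust-style power iteration over a small made-up trust matrix; the matrix, damping factor, and pre-trusted set are illustrative assumptions, not values from a real marketplace.

```python
# Minimal EigenTrust-style power iteration over a made-up trust graph.
# The local trust matrix, damping factor, and pre-trusted peers are
# illustrative assumptions, not values from a real marketplace.
import numpy as np

# C[i, j]: normalized local trust that peer i places in peer j
C = np.array([
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
])
p = np.array([1.0, 0.0, 0.0])   # distribution over pre-trusted peers
a = 0.15                        # weight given to pre-trusted peers

t = p.copy()
for _ in range(50):
    t_next = (1 - a) * C.T @ t + a * p   # EigenTrust update
    if np.allclose(t_next, t, atol=1e-9):
        break
    t = t_next

print("global trust scores:", np.round(t, 3))
```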
125. Challenges in the Design of Reputation Systems
• Insufficient participation
• Overwhelmingly positive feedback
• Dishonest reports
• Identity changes
• Value-imbalance exploitation (“milking the reputation”)
126. Insufficient Participation
• Free-riding: feedback constitutes a public good; once available, everyone can benefit from it costlessly.
• Disadvantage of early evaluators: providing feedback presupposes that the rater assumes the risk of transacting with the ratee (a competitive advantage to others).
• [Avery et al., 1999] propose a mechanism whereby early evaluators are paid to provide information and later evaluators pay to balance the budget.
127. Overwhelmingly Positive Feedback (I)
More than 99% of all feedback posted on eBay is positive. However, Internet auctions accounted for 16% of all consumer fraud complaints received by the Federal Trade Commission in 2004 (http://www.consumer.gov/sentinel/).
Reporting bias: the perils of reciprocity
• Reciprocity: the seller evaluates the buyer, and the buyer evaluates the seller
• Exchange of courtesies
• Positive reciprocity: positive ratings are given in the hope of getting a positive rating in return
• Negative reciprocity: negative ratings are avoided for fear of retaliation from the other party
128. Overwhelmingly Positive Feedback (II)
“The sound of silence”: no news is bad news…
• [Dellarocas and Wood, 2008] explore the frequency of different feedback patterns and use the non-reports to compensate for reporting bias.
• eBay traders are more likely to post feedback when satisfied than when dissatisfied.
• The data support the presence of positive and negative reciprocation among eBay traders.
129. Dishonest Reports
• “Ballot stuffing” (unfairly high ratings): a seller colludes with a group of buyers in order to be given unfairly high ratings by them.
• “Bad-mouthing” (unfairly low ratings): sellers can collude with buyers in order to “bad-mouth” other sellers that they want to drive out of the market.
• Design incentive-compatible mechanisms to elicit honest feedback [Jurca and Faltings, 2003: pay the rater if the report matches the next one; Miller et al., 2005: use a proper scoring rule to price the value of a report (see the sketch below); Papaioannou and Stamoulis, 2005: delay the next transaction over time]
• Use the “latent class” models described earlier in the tutorial (reputation systems are a form of crowdsourcing, after all…)
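The scoring-rule idea can be illustrated with a quadratic (Brier) payment that rewards calibrated probabilistic reports. This is only the building block: a full peer-prediction mechanism such as Miller et al.’s also needs a model linking one rater’s signal to another’s. The numbers below are made up.

```python
# Sketch of paying for reports with a proper scoring rule (quadratic /
# Brier). A full peer-prediction mechanism also needs a model linking
# one rater's signal to another's; this shows only the scoring-rule
# building block, with made-up numbers.
def brier_payment(report, reference_outcome, base=1.0):
    """report: probability the rater assigns to a 'positive' outcome.
    reference_outcome: 1 if the reference report (e.g., the next rater)
    is positive, 0 otherwise. Higher payment for better calibration."""
    score = 1.0 - (report - reference_outcome) ** 2   # in [0, 1]
    return base * score

# An honest, well-calibrated report vs. an understated one,
# both evaluated against a positive reference report:
print(brier_payment(report=0.8, reference_outcome=1))   # 0.96
print(brier_payment(report=0.2, reference_outcome=1))   # 0.36
```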
130. Identity Changes
• “Cheap pseudonyms”: it is easy to disappear and re-register under a new identity at almost zero cost [Friedman and Resnick, 2001]
• This introduces opportunities to misbehave without paying reputational consequences
• Increase the difficulty of online identity changes
• Impose upfront costs on new entrants: allow new identities (forget the past) but make it costly to create them
131. Value-Imbalance Exploitation
Three men attempted to sell a fake painting on eBay for US$135,805. The sale was abandoned just prior to purchase when the buyer became suspicious (http://news.cnet.com/2100-1017-253848.html).
• Reputation can be seen as an asset, not only to promote oneself, but also as something that can be cashed in through a fraudulent transaction with high gain.
“The Market for Evaluations”
132. The Market for Positive Feedback
A selling strategy that eBay users are actually using: the feedback market for gains in other markets.
“Riddle for a PENNY! No shipping - Positive Feedback”
• A 29-cent loss even in the event of a successful sale
• Price low, to speed up feedback accumulation
Possible solutions:
• Make the details of the transaction (besides the feedback itself) visible to other users
• Transaction-weighted reputation statistics [Brown 2006] (see the sketch below)
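A minimal sketch of a transaction-weighted reputation statistic, in the spirit of the [Brown 2006] suggestion; the feedback log and weighting rule are invented for illustration.

```python
# Sketch of a transaction-weighted reputation statistic: a penny sale
# bought purely for feedback moves the score far less than a large,
# genuine transaction. The feedback log below is invented.
def weighted_reputation(feedback):
    """feedback: list of (rating in {0, 1}, transaction_value) pairs."""
    total_value = sum(value for _, value in feedback)
    if total_value == 0:
        return 0.0
    return sum(rating * value for rating, value in feedback) / total_value

history = [(1, 0.01)] * 50 + [(0, 250.0)]   # 50 penny sales, one bad big sale
plain_average = sum(r for r, _ in history) / len(history)
print(f"unweighted: {plain_average:.2f}, weighted: {weighted_reputation(history):.3f}")
```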
133. Challenges for Crowdsourcing Markets (I)
• Two-sided opportunistic behavior
  – Reciprocal systems are worse than one-sided evaluation. In e-commerce markets, only sellers are likely to behave opportunistically, so there is no need for reciprocal evaluation.
  – In crowdsourcing markets, both sides can be fraudulent. Reciprocal systems are fraught with problems, though!
• Imperfect monitoring and heavy-tailed participation
  – In e-commerce markets, buyers can assess product quality directly upon receipt.
  – In crowdsourcing markets, verifying the answers is sometimes as costly as providing them.
  – Sampling often does not work, due to the heavy-tailed participation distribution (lognormal, according to self-reported surveys).
134. Challenges for Crowdsourcing Markets (II)
• Constrained capacity of workers
  – In e-commerce markets, sellers usually have an unlimited supply of products.
  – In crowdsourcing, workers have constrained capacity (they cannot be recommended continuously).
• No “price premium” for high-quality workers
  – In e-commerce markets, sellers with high reputation can sell their products at a relatively high price (a premium).
  – In crowdsourcing, it is the requester who sets the prices, which are generally the same for all workers.
135. Market Design Organizes the Crowd
• Reputation mechanisms
  – Seller side: ensure worker quality
  – Buyer side: ensure employer trustworthiness
• Task organization for task discovery (worker finds employer/task)
• Worker expertise recording for task assignment (employer/task finds worker)
136. The Importance of Task Discovery
• Heavy-tailed distribution of completion times. Why?
[Ipeirotis, “Analyzing the Amazon Mechanical Turk marketplace”, XRDS 2010]
137. The Importance and Danger of Priorities
• [Barabasi, Nature 2005] showed that human actions have power-law completion times
  – Mainly a result of prioritization
  – When tasks are ranked by priorities, power laws result (see the sketch below)
• [Cobham, 1954]: if a queuing system completes tasks with two priority queues and λ=μ, then completion times follow a power law
• [Chilton et al., HCOMP 2010]: workers on MTurk pick tasks from the “most HITs” or “most recent” queues
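A toy version of the Barabasi priority-queue model makes the heavy tail easy to see; the queue length, selection probability, and number of steps are illustrative choices, not fitted parameters.

```python
# Toy version of the Barabasi priority-queue model: a fixed-length task
# list where the highest-priority task is usually executed first.
# Waiting times come out heavy-tailed. Parameters are illustrative.
import random

L, P, STEPS = 100, 0.99, 50_000
queue = [(random.random(), 0) for _ in range(L)]   # (priority, arrival time)
waits = []

for t in range(STEPS):
    if random.random() < P:
        idx = max(range(L), key=lambda i: queue[i][0])   # pick highest priority
    else:
        idx = random.randrange(L)                        # occasionally pick at random
    waits.append(t - queue[idx][1])                      # waiting time of executed task
    queue[idx] = (random.random(), t)                    # replace with a new task

# Crude check of the heavy tail: share of tasks waiting far longer than the mean.
mean_wait = sum(waits) / len(waits)
print(f"mean wait: {mean_wait:.1f}")
print("share waiting > 100x mean:", sum(w > 100 * mean_wait for w in waits) / len(waits))
```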
138. The UI Hurts the Market!
• Practitioners know that HITs on the 3rd page and beyond are not picked up by workers.
• Many such HITs are left to expire after months, never completed.
• A badly designed task-discovery interface hurts every participant in the market! (and it is a reason for scientific modeling…)
• Better modeling as a queuing system may suggest other such improvements
139. Market Design Organizes the Crowd
• Reputation mechanisms
  – Seller side: ensure worker quality
  – Buyer side: ensure employer trustworthiness
• Task organization for task discovery (worker finds employer/task)
• Worker expertise recording for task assignment (employer/task finds worker)
140. Expert Search
• Find the best worker for a task (or within a task)
• For a task:
  – A significant amount of research exists on the topic of expert search [TREC track; Macdonald and Ounis, 2006]
  – Check the quality of workers across tasks (a ranking sketch follows below): http://url-annotator.appspot.com/Admin/WorkersReport
• Within a task: [Donmez et al., 2009; Welinder, 2010]
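One simple way to check worker quality across tasks is to rank workers by a smoothed (Bayesian-average) accuracy, so a worker with 2 out of 2 correct answers does not outrank one with 95 out of 100; the history and prior strength below are made-up values, not what the linked report actually computes.

```python
# Rank workers across tasks by smoothed (Bayesian-average) accuracy, so a
# worker with 2/2 correct answers does not outrank one with 95/100.
# The history and prior strength are made-up illustrative values.
def smoothed_accuracy(correct, total, prior=0.5, strength=10):
    return (correct + prior * strength) / (total + strength)

workers = {
    "A": (95, 100),   # (correct answers, total answers across tasks)
    "B": (2, 2),
    "C": (60, 90),
}

ranking = sorted(workers, key=lambda w: smoothed_accuracy(*workers[w]), reverse=True)
for w in ranking:
    print(w, round(smoothed_accuracy(*workers[w]), 3))
```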
141. Directions for Future Research
• Optimize the allocation of tasks to workers based on completion time and expected quality
• Explicitly take into consideration competition in the market, and switch tasks for a worker only when the benefit outweighs the switching overhead (cf. task switching in a CPU by the O/S)
• Recommender system for tasks (“workers like you performed well in…”)
• Create a market with dynamic pricing for tasks, following the pricing model of the stock market (prices increase for a task when the supply of work is low, and vice versa)
142. Outline
• Introduction: Human computation and crowdsourcing
• Managing quality for simple tasks
• Complex tasks using workflows
• Task optimization
• Incentivizing the crowd
• Market design
• Behavioral aspects and cognitive biases
• Game design
• Case studies
143. Human Computation
• Humans are not perfect mathematical models
• They exhibit noisy, stochastic behavior…
• And they exhibit common and systematic biases
144. Score the following from 1 to 10 (1: not particularly bad or wrong; 10: extremely evil)
a) Stealing a towel from a hotel
b) Keeping a dime you find on the ground
c) Poisoning a barking dog
[Parducci, 1968]
145. Score the following from 1 to 10 (1: not particularly bad or wrong; 10: extremely evil)
a) Testifying falsely for pay
b) Using guns on striking workers
c) Poisoning a barking dog
[Parducci, 1968]
146. Anchoring
• “Humans start with a first approximation (anchor) and then make adjustments to that number based on additional information.” [Tversky & Kahneman, 1974]
• [Paolacci et al., 2010]
  – Q1a: Are there more or fewer than 65 African countries in the UN?
  – Q1b: Are there more or fewer than 12 African countries in the UN?
  – Q2: How many countries are there in Africa?
  – Group A mean: 42.6
  – Group B mean: 18.5
147. Anchoring
• Subjects wrote down the last digits of their social security number before placing bids for wine bottles; users with lower SSNs bid lower…
• In the Netflix contest, users with high ratings early in a session were biased towards higher ratings later in the session…
• Crowdsourcing tasks can be affected by anchoring. [Mozer et al., NIPS 2010] describe techniques for removing the effects
148. Priming
• Exposure to one stimulus influences the response to another
• Stereotypes:
  – Asian-Americans perform better in math
  – Women perform worse in math
• [Shih et al., 1999] asked Asian-American women:
  – Questions about race: they did better on a math test
  – Questions about gender: they did worse on a math test
149. Exposure Effect
• Familiarity leads to liking...
• [Stone and Alonso, 2010]: evaluators of the Bing search engine increase their ratings of relevance over time, for the same results
150. Framing
• Presenting the same option in different formats leads to different decisions. People avert options that imply loss [Tversky and Kahneman, 1981]
151. Framing: 600 people affected by a deadly disease
Room 1:
a) Save 200 people’s lives
b) A 33% chance of saving all 600 people and a 66% chance of saving no one
• 72% of participants chose option A
• 28% of participants chose option B
Room 2:
c) 400 people die
d) A 33% chance that no people will die; a 66% chance that all 600 will die
• 78% of participants chose option D (equivalent to option B)
• 22% of participants chose option C (equivalent to option A)
People avert options that imply loss
152. Very long list of cognitive biases…
• http://en.wikipedia.org/wiki/List_of_cognitive_biases
• [Mozer et al., 2010] try to learn and remove sequential effects from human computation data…
153. Outline
• Introduction: Human computation and crowdsourcing
• Managing quality for simple tasks
• Complex tasks using workflows
• Task optimization
• Incentivizing the crowd
• Market design
• Behavioral aspects and cognitive biases
• Game design
• Case studies
154. Games with a Purpose [Luis von Ahn and Laura Dabbish, CACM 2008]
Three generic game structures:
• Output agreement: type the same output
• Input agreement: decide whether they have the same input
• Inversion problem: P1 generates output from an input; P2 looks at P1’s output and guesses P1’s input
155. Output Agreement: ESP Game
• Players look at a common input
• They need to agree on the output
156. Improvements
• Game-theoretic analysis indicates that players will converge to easy words [Jain and Parkes]
• Solution 1: add “taboo words” to prevent guessing easy words
• Solution 2: KissKissBan: a third player tries to guess (and block) the agreement
157. Input Agreement: TagATune
• It is sometimes difficult to type identical output (e.g., “describe this song”)
• Show the same or different inputs, let users describe them, and ask players whether they have the same input
158. Inversion Problem: Peekaboom
• Non-symmetric players
• Input: an image with a word
• Player 1 slowly reveals the picture
• Player 2 tries to guess the word
159-163. [Peekaboom demo: the image is gradually revealed, with “HINT” prompts, until the guessing player arrives at the word “BUSH”]
164. Protein Folding
• Protein folding: proteins fold from long chains into small balls, each with a very specific shape
• The shape is the lowest-energy configuration, which is the most stable
• The fold shape is very important for understanding interactions with other molecules
• Extremely expensive computationally! (too many degrees of freedom)
165. FoldIt Game
• Humans are very good at reducing the search space
• Humans try to fold the protein into a minimal-energy state
• A player can leave a protein unfinished and let others continue from there…
166. Outline
• Introduction: Human computation and crowdsourcing
• Managing quality for simple tasks
• Complex tasks using workflows
• Task optimization
• Incentivizing the crowd
• Market design
• Behavioral aspects and cognitive biases
• Game design
• Case studies
167. Case Study: Freebase
Praveen Paritosh, Google
168. Crowdsourcing Case Study: AdSafe
172. A few of the tasks in the past
• Detect pages that discuss swine flu
  – A pharmaceutical firm had a drug “treating” (off-label) swine flu
  – The FDA prohibited the pharmaceutical company from displaying the drug’s ad on pages about swine flu
  – Two days to build and go live
• A big fast-food chain does not want its ad to appear:
  – On pages that discuss the brand (99% negative sentiment)
  – On pages discussing obesity
  – Three days to build and go live
173. Need to Build Models Fast
• Traditionally, modeling teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing
• No time for such things…
• However, we can now outsource preprocessing tasks such as labeling, feature extraction, verifying information extraction, etc.
  – using Mechanical Turk, oDesk, etc.
  – quality may be lower than expert labeling (much?)
  – but low costs can allow massive scale
174. AdSafe Workflow
• Find URLs for a given topic (hate speech, gambling, alcohol abuse, guns, bombs, celebrity gossip, etc.): http://url-collector.appspot.com/allTopics.jsp
• Classify URLs into appropriate categories: http://url-annotator.appspot.com/AdminFiles/Categories.jsp
• Measure the quality of the labelers and remove spammers: http://qmturk.appspot.com/
• Get humans to “beat” the classifier by providing cases where the classifier fails: http://adsafe-beatthemachine.appspot.com/
175. Case Study: OCR and reCAPTCHA
176. Scaling Crowdsourcing: Use Machine Learning
• Need to scale crowdsourcing
• Basic idea: build a machine learning model (trained on existing crowdsourced data) and use it instead of humans
• New case → automatic model (through machine learning) → automatic answer
177. Scaling Crowdsourcing: Iterative Training
• Triage:
  – machine when confident
  – humans when not confident
• Retrain using the new human input → improve the model → reduce the need for human input
• New case → automatic model (through machine learning) → automatic answer; the model is seeded with data from existing crowdsourced answers, and humans answer when the model is not confident
178. Scaling Crowdsourcing: Iterative Training, with Noise
• Machine when confident, humans otherwise
• Ask as many humans as necessary to ensure quality
• New case → automatic model (through machine learning): confident about quality? → automatic answer; not confident? → get human(s) to answer, and feed their answers back into the crowdsourced training data
179. Scaling Crowdsourcing: Iterative Training, with Noise
• Machine when confident, humans otherwise
• Ask as many humans as necessary to ensure quality (or even query other machines…)
• New case → automatic model: confident about quality? → automatic answer; not confident? → get human(s) or other machines to answer, and add their answers to the crowdsourced training data
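A minimal sketch of the triage loop on slides 176-179: answer automatically when the model is confident, route to the crowd otherwise, and fold the new labels back into training. The classifier choice, confidence threshold, synthetic data, and get_crowd_labels stand-in are assumptions for illustration, not the system used in the case study.

```python
# Sketch of the triage loop: answer automatically when the model is
# confident, otherwise route to the crowd and fold the new labels back
# into training. Classifier, threshold, and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def get_crowd_labels(items):
    """Placeholder for posting items to the crowd and aggregating votes."""
    raise NotImplementedError

def triage(model, X_new, threshold=0.9):
    proba = model.predict_proba(X_new)
    confident = proba.max(axis=1) >= threshold
    answers = model.predict(X_new)
    return answers, confident          # route the ~confident rows to humans

# Seed the model with existing crowdsourced labels (X_crowd, y_crowd).
X_crowd = np.random.rand(200, 5)
y_crowd = (X_crowd[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X_crowd, y_crowd)

X_new = np.random.rand(20, 5)
answers, confident = triage(model, X_new)
# In a real loop: labels = get_crowd_labels(X_new[~confident]); then
# append them to (X_crowd, y_crowd) and refit the model.
print(f"{confident.sum()} answered automatically, {(~confident).sum()} sent to the crowd")
```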