Incentives-driven technology design
Presentation at the 2nd Qualinet Summer School on Quality of Experience.

Transcript

  • 1. INCENTIVES-DRIVEN TECHNOLOGY DESIGN. Elena Simperl, University of Southampton, UK. With slides from "Making Your Semantic Application Addictive: Incentivizing Users", Tutorial@WIMS2012, by Elena Simperl, Roberta Cuel, and Monika Kaczmarek.
  • 2. CROWDSOURCING: PROBLEM SOLVING VIA OPEN CALLS. "Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers." [Howe, 2006]
  • 3. CROWDSOURCING COMES IN DIFFERENT FORMS AND FLAVORS
  • 4. PROMINENT EXAMPLE: HUMAN COMPUTATION. Outsourcing tasks that machines find difficult to solve to humans.
  • 5. PROMINENT EXAMPLE: VOLUNTEERING PROJECTS
  • 6. PROMINENT EXAMPLE: CHALLENGES
  • 7. INCENTIVES AND MOTIVATION
  • 8. SUCCESSFUL CROWDSOURCING IS DIFFICULT TO REPLICATE. It crucially depends on:
    • choosing the right crowdsourcing approach (effectiveness, efficiency, timeliness, community building);
    • understanding the behavior of the crowd; and
    • aligning the goals of the application and the incentives it offers with the motivation of the users.
  • 9. NOT EVERY FORM OF CROWDSOURCING IS FEASIBLE ALL THE TIME
    • Tasks and application domain (decomposable, verifiable, skills/expertise required)
    • Overhead related to game interface design and further development
    • Acceptance of performance assessments and their timeliness (no gold standard, community ratings)
    • Privacy concerns related to microtask platforms (anonymous crowd)
    • Acceptance issues with games in a productive environment
    • Complex workflows need to integrate results from various crowdsourcing projects
  • 10. EXAMPLE: GAMES VS. MICROTASKS
    • Both involve tasks leveraging common human skills, appealing to large audiences; the selection of domain and task is more constrained in games, to create a typical game UX
    • Tasks are decomposed into smaller units of work to be solved independently; complex workflows: creating a casual game experience vs. workflow patterns in microtasks
    • Quality assurance: synchronous interaction, levels of difficulty, and near-real-time feedback in games; many methods apply in both cases (redundancy, votes, statistical techniques)
    • Different sets of incentives and motivators
  • 11. GAMES VS. MTURK
  • 12. UNDERSTANDING AND ALIGNING INCENTIVES IS ESSENTIAL
    • Motivation: the driving force that makes humans achieve their goals
    • Incentives: 'rewards' assigned by an external 'judge' to a performer for undertaking a specific task
    • Intrinsic motivation is driven by interest in or enjoyment of the task itself; extrinsic motivation applies if the task is considered boring, dangerous, useless, socially undesirable, or dislikable by the performer
    • Incentives can be related to both extrinsic and intrinsic motivations
    • Common belief (among economists): incentives can be translated into a sum of money for all practical purposes
  • 13. INTRINSIC/EXTRINSIC MOTIVATIONS ARE WELL-STUDIED IN THE LITERATURE [Kaufman, Schulze, Veit]
  • 14. TYPES OF MOTIVATION: PERSON VS. ARTIFACT
    • Person, intrinsic (predisposed in the person, e.g., drives, needs, desires):
      • Artifact-internal (embedded in the structure, e.g., task, tools): fun, joy, gaming, interest, satisfaction, self-actualization, self-reinforcement
      • Artifact-external (additional to the structure, external reinforcements): social appreciation, reputation, love, trust, social capital, community support
    • Person, extrinsic (additional to personal predispositions, external reinforcements):
      • Artifact-internal: usability, sociability, design-for-fun, curiosity, community-building support
      • Artifact-external: material/financial capital, money, rewards, prizes, medals, credit points
  • 15. EXPLICIT VS. IMPLICIT CONTRIBUTIONS AFFECT MOTIVATION AND ENGAGEMENT
    • Explicit: users are aware of how their input contributes to the achievement of the application's goal (and identify themselves with it)
    • Implicit: tasks are hidden behind the application's narrative; engagement is ensured through other incentives
  • 16. WHY NOT JUST USE EXTERNAL REWARDS? Reward models are often easier to study and control than motivation. But:
    • they may require additional resources;
    • they assume performance can be feasibly measured;
    • there are different models to choose from: pay-per-time, pay-per-unit, winner-takes-all, etc.;
    • it is not always easy to abstract from social aspects (free-riding, social pressure); and
    • they may undermine intrinsic motivation.
  • 17. MEASURING PERFORMANCE CAN BE CHALLENGING
    • Who and how: redundancy (see the aggregation sketch below); excluding spam and obviously wrong answers; voting and ratings by the crowd; assessment by the requester; where does the ground truth come from, and is it needed at all? (Note: improving the recall of algorithms.)
    • When: real-time constraints in games; near-real-time microtasks, see Bernstein et al., "Crowds in Two Seconds: Enabling Realtime Crowd-Powered Interfaces", Proc. UIST 2011.
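As an illustration of the redundancy idea above, the following minimal sketch (my addition, not code from the talk) aggregates redundant crowd answers by majority vote and flags low-agreement questions for further review; the data layout and the 0.6 agreement threshold are assumptions chosen for illustration.

```python
from collections import Counter

def aggregate_by_majority(answers, min_agreement=0.6):
    """Aggregate redundant crowd answers per question by majority vote.

    answers: dict mapping question id -> list of worker answers.
    Returns (question id -> (winning answer, agreement ratio)) plus the
    ids of questions whose agreement falls below min_agreement.
    """
    results, needs_review = {}, []
    for qid, votes in answers.items():
        winner, count = Counter(votes).most_common(1)[0]
        agreement = count / len(votes)
        results[qid] = (winner, agreement)
        if agreement < min_agreement:
            needs_review.append(qid)  # e.g., ask more workers or an expert
    return results, needs_review

# Example: three redundant assignments per question
answers = {"img-17": ["cat", "cat", "dog"], "img-42": ["tree", "car", "cat"]}
results, low = aggregate_by_majority(answers)
print(results)  # img-17 -> ('cat', ~0.67); img-42 -> ('tree', ~0.33)
print(low)      # ['img-42']: low agreement, route to further review
```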
  • 18. A FRAMEWORK OF ANALYSIS
    • Goal: communication level (about the goal of the tasks): high / medium / low; participation level (in the definition of the goal): high / medium / low; clarity level: high / low
    • Tasks: variety of tasks: high / medium / low; specificity of tasks: high / medium / low; identification with tasks: high / low; required skills: highly specific / trivial / common
    • Social structure: hierarchy: neutral / hierarchical; nature of the good being produced: public good (non-rival, non-exclusive) / private good (rival, exclusive)
  • 19. EXAMPLE: GAMES WITH A PURPOSE
  • 20. WHAT TASKS CAN BE SUBJECT TO A GAME?*
    • Decomposable into simpler tasks
    • Nested tasks
    • Performance is measurable
    • Obvious rewarding scheme
    • Skills can be arranged in a smooth learning curve
    *http://www.lostgarden.com/2008/06/what-actitivies-that-can-be-turnedinto.html
  • 21. DIMENSIONS OF GWAP DESIGN: WHAT IS THE PURPOSE OF THE GAME?
    • Concrete specification of the task; example: annotation of a set of 500,000 images using free labels, a controlled vocabulary, etc.
    • Where does the input data come from, and how much noise can you expect in it? Example: validating the results of algorithms; poor input data hampers the UX
    • How can the purpose be translated into decomposable tasks? Repetitive tasks vs. player experience; see motivation (a decomposition sketch follows below)
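Under the assumption that the corpus is a flat list of independent items, decomposition can be as simple as slicing it into game-sized rounds; the chunk size and identifiers below are made up for illustration.

```python
def decompose(image_ids, round_size=5):
    """Split a large annotation job into independent, game-sized rounds."""
    for i in range(0, len(image_ids), round_size):
        yield image_ids[i:i + round_size]

corpus = [f"img-{n}" for n in range(500_000)]  # e.g., the 500,000-image job
rounds = decompose(corpus)
print(next(rounds))  # ['img-0', ..., 'img-4'] -> one playable round
```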
  • 22. DIMENSIONS OF GWAP DESIGN (2)
    • What sub-tasks can you identify? Number of interrelated steps in a casual game and granularity of tasks
    • What does the human-readable description of the task look like?
  • 23. DIMENSIONS OF GWAP DESIGN (3)
    • How do you measure performance? Redundancy (output-agreement games, sketched below); consensus (input-agreement, cf. Tag-A-Tune); describer-guesser
    • What do users receive points for, when, and how many? Mechanism design
    • Note: tasks cannot be too difficult, otherwise they feel like work; they have to be interesting and intellectually challenging, otherwise the game is boring; and players should be able to get better at it during the game.
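To make the output-agreement mechanic concrete, here is a minimal sketch (my illustration, not the slides' code): two players label the same image independently and earn points only when their normalized labels match, which is what turns matches into plausible annotations. The point value and normalization rule are assumptions.

```python
def normalize(label: str) -> str:
    """Canonicalize a label so trivially different answers can match."""
    return label.strip().lower()

def score_round(label_a: str, label_b: str, points: int = 50):
    """Output-agreement scoring: both players earn points only on a match.

    A match between two independent players is treated as evidence that
    the label actually describes the image (the ESP-game idea).
    """
    a, b = normalize(label_a), normalize(label_b)
    if a == b:
        return points, a  # reward both players, record the agreed label
    return 0, None        # no agreement -> no points, no annotation

points, label = score_round("  Cat ", "cat")
print(points, label)  # 50 cat
```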
  • 24. SINGLE- VS. MULTI-PLAYER GAMES
    • Multi-player games: better UX (players appreciate social contact and intellectual challenge); consensus mechanism, less spam; rapid feedback cycles
    • But: they require player-matching functionality and enough players in the system at the same time; this can be simulated using bots and (lots of) pre-recorded rounds, as sketched below
    • Single-player games: require a different quality-assurance method (the player receives the reward once the correct answer is determined), or training data to build an initial profile
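The pre-recorded-rounds trick can be sketched as follows, under assumed data structures (the recorded-round store and its contents are hypothetical): when no live partner is online, the game replays a stored round and scores the player against the recorded partner's answers.

```python
import random

# Hypothetical store of past rounds: image id -> labels a previous partner
# entered for that image. In a real system this would live in a database.
RECORDED_ROUNDS = {
    "img-17": ["cat", "kitten", "pet"],
    "img-42": ["tree", "oak"],
}

def play_against_recording(player_label: str, image_id: str) -> bool:
    """Simulate a live partner by matching against a pre-recorded round."""
    recorded = RECORDED_ROUNDS.get(image_id, [])
    return player_label.strip().lower() in recorded

# When no live partner is available, pick a recorded round to replay.
image_id = random.choice(list(RECORDED_ROUNDS))
agreed = play_against_recording("Cat", image_id)
print(image_id, agreed)  # e.g., img-17 True -> counts as agreement
```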
  • 25. VERBOSITY AS AN INVERSION-PROBLEM GAME
  • 26. TASKS SHOULD BE SOLVABLE
  • 27. MECHANISM DESIGN
    • An area of game theory: the game designer defines the structure of the game, is interested in specific outcomes, and attempts to influence players' behavior to achieve those outcomes
    • Different reward models can be applied: pay-per-item vs. winner-takes-all; competitions among individuals and teams; how to price contributions (compare the payout sketch below)
    • These parameters will change the behavior of the users in the system
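As a toy comparison of the two reward models named above (my sketch; the per-item rate, prize, and contribution counts are made-up parameters), note how pay-per-item spreads rewards across all contributors while winner-takes-all concentrates everything on the top contributor, changing what behavior is rational for players.

```python
def pay_per_item(contributions: dict, rate: float = 0.05) -> dict:
    """Each accepted unit of work earns a fixed rate."""
    return {worker: n * rate for worker, n in contributions.items()}

def winner_takes_all(contributions: dict, prize: float = 100.0) -> dict:
    """Only the top contributor is paid; everyone else gets nothing."""
    top = max(contributions, key=contributions.get)
    return {worker: (prize if worker == top else 0.0)
            for worker in contributions}

contributions = {"ana": 420, "bo": 395, "cem": 12}
print(pay_per_item(contributions))      # steady earnings for all contributors
print(winner_takes_all(contributions))  # all-or-nothing -> fuels competition
```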
  • 28. DIMENSIONS OF GWAP DESIGN (4)
    • How do you translate crowd inputs into validated answers? When are two answers the same? How many assignments per question? Players' reliability and spam (see the reliability sketch below)
    • How do you assign challenges to players? Randomly vs. based on previous performance; the same applies to matching players
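One common way to operationalize player reliability (a sketch of the general idea, not a method prescribed in the slides) is to score each player by how often their answers agree with the per-question consensus, and to discount or drop players who fall below a threshold:

```python
from collections import Counter

def player_reliability(submissions, min_reliability=0.3):
    """Estimate each player's reliability as agreement with the consensus.

    submissions: list of (player, question, answer) triples.
    Returns (reliability per player, set of suspected spammers).
    """
    by_question = {}
    for player, question, answer in submissions:
        by_question.setdefault(question, []).append((player, answer))

    hits, totals = Counter(), Counter()
    for votes in by_question.values():
        consensus, _ = Counter(a for _, a in votes).most_common(1)[0]
        for player, answer in votes:
            totals[player] += 1
            hits[player] += (answer == consensus)

    reliability = {p: hits[p] / totals[p] for p in totals}
    spammers = {p for p, r in reliability.items() if r < min_reliability}
    return reliability, spammers

submissions = [("ana", "q1", "cat"), ("bo", "q1", "cat"), ("cem", "q1", "dog"),
               ("ana", "q2", "tree"), ("bo", "q2", "tree"), ("cem", "q2", "car")]
reliability, spammers = player_reliability(submissions)
print(reliability)  # ana and bo always match the consensus; cem never does
print(spammers)     # {'cem'}
```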
  • 29. EXAMPLE: GAMIFICATION
  • 30. TASTE IT! TRY IT!
    • Restaurant review Android app developed in the INSEMTIVES project (https://play.google.com/store/apps/details?id=insemtives.android&hl=en)
    • Uses DBpedia concepts to generate structured reviews
    • Uses mechanism design/gamification to configure incentives
    • User study: 2,274 reviews by 180 reviewers, referring to 900 restaurants and using 5,667 DBpedia concepts
    • [Chart: number of reviews and of semantic annotations (type of cuisine, dishes) per venue type: cafe, fast food, pub, restaurant]
  • 31. SOCIABILITY DESIGN ASPECTS
  • 32. MECHANISM DESIGN EXPERIMENTS
    • Two experiments with 150 and 30 students: points vs. badges; no information about others vs. information about others (neighborhood, median, full leaderboard); a sketch of the neighborhood display follows below
    • Findings: presenting information on the performance of peers helps to increase the number of reviews; in treatments with badges, individuals tend to contribute more than in treatments without badges
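The "neighborhood" treatment mentioned above, showing a contributor only the peers ranked immediately around them rather than the full leaderboard, could look like this; a minimal sketch with invented scores, not the experiment's actual code:

```python
def leaderboard_neighborhood(scores: dict, user: str, k: int = 2) -> list:
    """Return the user's rank neighborhood: the k peers above and below.

    Showing nearby peers (rather than the full leaderboard) keeps the
    next target attainable, one way to configure social feedback.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    i = ranked.index(user)
    lo, hi = max(0, i - k), i + k + 1
    return [(rank + 1, name, scores[name])
            for rank, name in enumerate(ranked)][lo:hi]

scores = {"ana": 120, "bo": 95, "cem": 90, "dee": 60, "eli": 10}
for rank, name, pts in leaderboard_neighborhood(scores, "cem", k=1):
    print(rank, name, pts)  # shows bo (above), cem, dee (below)
```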
  • 33. INTERPLAY OF INCENTIVES AND MOTIVATION ACHIEVES MAXIMAL RESULTS
    • Focus on the actual goal and incentivize related actions: write posts, create graphics, annotate pictures, reply to customers in a given time...
    • Build a community around the intended actions: reward helping each other in performing the task and interaction; reward recruiting new contributors
    • Reward repeated actions, so that the actions become part of the daily routine
  • 34. EXAMPLE: THE CROWDSOURCED ENTERPRISE (Aniket Kittur et al., "The Future of Crowd Work", CSCW 2013)
  • 35. THE FUTURE OF CROWD WORK
    • Reputation system for workers
    • More than financial incentives
    • Recognize worker potential (badges); pay workers for their expertise
    • Train less skilled workers (tutoring system)
  • 36. CONCLUSIONS
  • 37. CONCLUSIONS
    • Designing crowdsourcing projects remains a challenge due to: the multitude of approaches and their applicability to domains and tasks; the costs and expertise required to run user-centered application design (in open environments); and limited insight into the motivation and incentives of popular platforms
    • Factors relevant for studying and influencing user behavior are well studied in the social sciences and economics, and can be applied, as shown in the examples today, to refine incentive schemes and rewards
    • Many other useful tools are available (not covered in this tutorial): machine learning for quality assessment; HCI