SemTech 2012 - Making your semantic app addictive: Incentivizing Users

  • Offering the right solution is not the whole of the success story.
  • We start with a laboratory experiment with university students. Running the experiment in an experimental laboratory requires us to provide incentives for students to participate. They are not friends doing us a favor by testing the software, nor students in our course earning credit – that is not what we want. We want subjects who are neutral towards us and the task, and who respond only to the incentive structure we provide. There is a show-up fee we must pay the students no matter what, simply to maintain the laboratory's reputation and to make sure students keep coming to experiments organized by other researchers as well. You can think of environments where you can run the test without paying the participation fee, offering only the flexible part (Mechanical Turk?). So don't focus on the €5; concentrate on the flexible part of the payment.
  • (Leon Festinger 1954); (Bram Buunk and Thomas Mussweiler 2001; Jerry Suls, Rene Martin, and Ladd Wheeler 2002); (Solomon E. Asch 1956; George A. Akerlof 1980; Stephen R. G. Jones 1984; Douglas Bernheim 1994).

    1. Making Your Semantic Application Addictive: Incentivizing Users. Roberta Cuel, University of Trento (Italy) – KIT (Germany). roberta.cuel@unitn.it – roberta.cuel@kit.edu
    2. Topics of the session:
    • The role of human contributions in the creation of semantic descriptions of digital artifacts.
    • Methods and principles for the design of incentive-compatible semantic-annotation technology.
    • Case studies: TID – Telefónica R&D corporate knowledge; the “Taste it! Try it!” mobile app.
    3. Semantic content authoring relies on human input:
    • modeling a domain,
    • understanding text and media content,
    • integrating data sources originating from different contexts,
    • …
    Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth. The aim: realize incentivized semantic applications.
    4. What is the secret to sustainable success? Offer a solution to a real problem – the right solution at the right time. That is at least 50% of success.
    5. Our approach – ideally: field → desk → lab → field. A procedural ordering of methods to develop incentive-compatible applications.
    6. Motivations in the Web 2.0 – motivation and incentives: reciprocity, reputation, competition, altruism, self-esteem, fun, money.
    7. Intrinsic / extrinsic motivations. Kaufman, Schulze, Veit (Mannheim University).
    8. Theories of motivation (from the Latin movere, to move). Performance = f(ability × motivation); incentives → motivation → performance. Psychological meaning: an internal mental state pertaining to the initiation, direction, persistence, intensity, and termination of behavior.
    • Content theories of motivation: need theories; Herzberg's two-factor theory; McClelland's achievement–power–affiliation theory; the job characteristics approach (skill variety, autonomy, …).
    • Process theories of motivation: reinforcement theory; goal-setting theory; expectancy theory; organizational justice theory; …
    9. The incentive analytical tool characterizes an annotation setting along four dimensions (sketched as a data structure below):
    • Goal: communication level (about the goal of the tasks), participation level (in the definition of the goal), and identification with the goal – each rated high/medium/low.
    • Tasks: variety of the tasks (high/medium/low), specificity (neutral to highly specific), and required skills (trivial, common, specific).
    • Nature of the good being produced: public good (non-rival, non-exclusive) vs. private good (rival, exclusive).
    • Social structure: from non-hierarchical to highly hierarchical; clarity level.
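One way to use this tool is to classify a setting along the four dimensions before choosing incentives. A minimal sketch of that classification as a data structure – the field names are my own rendering of the slide's dimensions, and the example values are an illustrative (not official) reading of the TID case that follows:

```python
# Sketch of the slide's analytical dimensions as a data structure, so a
# setting can be classified before choosing incentives. Field names are my
# rendering of the slide; the example values are an illustrative reading
# of the TID intranet case, not an official classification.
from dataclasses import dataclass

@dataclass
class IncentiveSetting:
    communication_level: str       # about the goal of the tasks: high/medium/low
    participation_level: str       # in the definition of the goal
    identification_with_goal: str
    task_variety: str
    task_specificity: str          # neutral ... highly specific
    required_skills: str           # trivial / common / specific
    good_produced: str             # "public" (non-rival, non-exclusive) or "private"
    social_structure: str          # non-hierarchical ... highly hierarchical

tid = IncentiveSetting("medium", "low", "high", "medium", "highly specific",
                       "specific", "public", "hierarchical")
print(tid.good_produced)  # "public" -> expect free riding (see slide 14)
```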
    10. Two case studies: TID – Telefónica R&D corporate knowledge; “Taste it! Try it!” – a mobile app for reviewing restaurants and other PoIs.
    11. Enterprise Knowledge Management @ TID – Spain.
    • Services of the intranet portal: document management, corporate directories, pilot/product/service catalogues, news, bank of ideas, blogs, wikis, forums, search engines.
    • Some figures: 1200 employees in 7 cities and 3 countries (↑); ~3050 visits per day, ~56000 page views (impressions) per day, average visit time: 20 minutes.
    12. Field and domain analysis.
    • Domain analysis: site visits and semi-structured, qualitative interviews (communication processes, existing usage practices, problems, tools/solutions), with tape recording, transcription, and data analysis by ex-post categorization; focus group discussions.
    • Usability lab tests and expert walkthroughs.
    • Lab experiment: two payment schemes.
    • Field experiment: natural vs. semantic annotation.
    13. The incentive analytical tool and TID motivations. We need to design the “game” in a way that achieves the desired annotation outcome without distracting employees too much from their main job.
    14. The mechanism design exercise in our case study (I) – an interplay of two alternative games:
    • Principal–agent game: there are no tools to check whether employees perform at their best; management can implement various incentives: piece-rate wages (labour-intensive tasks), performance measurement (all levels of tasks), tournaments (internal labour market).
    • Public goods game: semantic content creation is a public good (non-excludable and non-rival), hence the problem of free riding (illustrated in the sketch below).
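To make the free-riding problem concrete, here is a minimal sketch of a standard linear public goods game – not from the slides; the endowment, multiplier, and group size are illustrative assumptions:

```python
# Minimal sketch of a linear public goods game, illustrating why a
# non-excludable, non-rival good like semantic annotation invites free riding.
# Endowment, multiplier, and group size are illustrative assumptions.

def payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied pool of all contributions."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: the group is best off.
print(payoffs([20, 20, 20, 20]))   # [32.0, 32.0, 32.0, 32.0]

# One player free rides and earns more than the contributors -
# exactly the incentive problem the mechanism design must fix.
print(payoffs([20, 20, 20, 0]))    # [24.0, 24.0, 24.0, 44.0]
```

Because each contributed unit returns only multiplier/n to the contributor (0.4 here), withholding effort dominates individually even though full contribution is best for the group – the same tension the TID incentives must resolve.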
    15. The prototype creation
    16. PD (participatory design) workshops and HCI analysis
    17. Lab experiment: 36 students; individual task: annotation of images; time: 8 minutes. Two reward/incentive systems (a payout sketch follows below):
    • pay per click: €0.03 per tag;
    • winner-takes-all model: €20.
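A quick sketch of how the two payment rules translate into payouts – the per-tag rate and the prize are from the slide, while the annotation counts and function names are made-up examples:

```python
# Sketch of the two payment rules from the lab experiment.
# Per-tag rate (EUR 0.03) and prize (EUR 20) come from the slide;
# the annotation counts below are made-up examples.

def pay_per_tag(tags, rate=0.03):
    """Piece-rate treatment: every tag earns a fixed amount."""
    return {s: round(n * rate, 2) for s, n in tags.items()}

def winner_takes_all(tags, prize=20.0):
    """Tournament treatment: only the top annotator is paid."""
    winner = max(tags, key=tags.get)
    return {s: (prize if s == winner else 0.0) for s in tags}

tags = {"s1": 40, "s2": 85, "s3": 62}
print(pay_per_tag(tags))        # {'s1': 1.2, 's2': 2.55, 's3': 1.86}
print(winner_takes_all(tags))   # {'s1': 0.0, 's2': 20.0, 's3': 0.0}
```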
    18. (screenshot slide – www.insemtives.eu)
    19. (screenshot slide – www.insemtives.eu)
    20. (screenshot slide – www.insemtives.eu)
    21. Some results: in the WTA (winner-takes-all) treatment, 76% of subjects make more annotations than the average number of annotations in the PPT (pay-per-tag) scenario.
    22. Prototype refinement
    23. Incentivizing the tool … making it fun
    24. … harnessing network and reputation effects:
    • a competitive environment;
    • an internal labour market;
    • reputation (in terms of expertise);
    • the HR department should be involved.
    25. Field experiment: real users and tasks should have practical usefulness for users (search) and social implications, providing information about people and their performance.
    26. Some results: 2761 annotations, 82% of them semantic. Social rewards are as strong as monetary rewards! (Mann-Whitney test; a sketch of the test follows below.) Treatments – Competition: €200 prize; Social: “daily contributor” recognition on Yammer.
                                      Competition    Social
    Number of annotations                    1589      1172
    % of semantic annotations              88.92%    71.84%
    Maximum number of annotations             439       262
    Annotations of free text                  180       326
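For reference, a minimal sketch of how such a Mann-Whitney U comparison can be run – the per-subject counts below are made-up illustrative numbers, not the experiment's data:

```python
# Sketch of the Mann-Whitney U test used on the slide to compare per-subject
# annotation counts across treatments. The two samples are made-up
# illustrative numbers, NOT the experiment's data.
from scipy.stats import mannwhitneyu

competition = [120, 95, 210, 60, 180, 75]  # illustrative per-subject counts
social      = [110, 80, 150, 55, 130, 90]

u_stat, p_value = mannwhitneyu(competition, social, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
# A large p-value would support the slide's claim that social rewards
# perform on par with monetary rewards.
```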
    27. Taste it! Try it! Goal of the tool: provide semantically-enabled reviews. Features:
    • sufficiently easy to create for end-user acceptance;
    • keeps the user entertained – Facebook and badges;
    • offers a personalized, semantic, context-aware recommendation process (see the sketch below).
    Research context: (ontology-based) collaborative filtering and user clustering; structuring and disambiguation of the reviews using domain knowledge and incentives.
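The slide names (ontology-based) collaborative filtering as the research context. As a rough illustration of the recommendation step only, here is a plain user-based CF sketch with made-up ratings; the real system additionally exploits domain knowledge:

```python
# Plain user-based collaborative filtering sketch. The ratings are made-up
# examples; the slide's actual approach is ontology-based and richer.
from math import sqrt

ratings = {  # user -> {restaurant: rating}
    "ann": {"trattoria": 5, "sushi_bar": 2, "pizzeria": 4},
    "bob": {"trattoria": 4, "sushi_bar": 1, "bistro": 5},
    "eve": {"sushi_bar": 5, "bistro": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

def recommend(user):
    """Suggest the best-rated unseen item from the most similar other user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: cosine(ratings[user], ratings[u]))
    unseen = {i: r for i, r in ratings[nearest].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ann"))  # -> "bistro" (bob is most similar to ann)
```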
    28. The application: badges; a scenario.
    29. Experiment. Hypotheses (a feedback sketch follows below):
    • points vs. badges;
    • no information about others vs. information;
    • no information about oneself vs. information.
    (6 groups) × (~25 students) = ~150 students:
    • Group 0: points, piece-rate, no info on others (private info), web-based
    • Group 1: points, piece-rate, median, public info
    • Group 2: points, piece-rate, neighborhood, public info
    • Group 3: badges, piece-rate, no info on others (private info), web-based
    • Group 4: badges, piece-rate, median, public info – treatment
    • Group 5: badges, piece-rate, neighborhood, public info – treatment
    Points: max. 8 for creating reviews and 2 points for filling in the questionnaire.
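A small sketch of the points rule and the three feedback conditions as I read them from this design – the scoring cap and group structure follow the slide, while the function names, the example scores, and the exact feedback wording are my own assumptions:

```python
# Sketch of the points rule and the feedback each condition might display.
# Scoring (max 8 for reviews, 2 for the questionnaire) follows the slide;
# function names, scores, and feedback wording are illustrative assumptions.
from statistics import median

def score(review_points, did_questionnaire):
    """Points rule from the slide: reviews capped at 8, questionnaire adds 2."""
    return min(review_points, 8) + (2 if did_questionnaire else 0)

def feedback(own, all_scores, condition):
    if condition == "private":        # Groups 0/3: only your own score
        return f"your score: {own}"
    if condition == "median":         # Groups 1/4: your score vs. the median
        return f"your score: {own}, group median: {median(all_scores)}"
    if condition == "neighborhood":   # Groups 2/5: the scores just around yours
        ranked = sorted(all_scores)
        i = ranked.index(own)
        return f"neighborhood: {ranked[max(i - 1, 0):i + 2]}"

scores = [4, 6, 7, 9, 10]
print(feedback(9, scores, "neighborhood"))  # neighborhood: [7, 9, 10]
```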
    30. Results per group (averages):
    Group     Score     Reviews   Semantic annotations   Time    Actions (×10)
    Group 0   7.4223    11.41     4.41                   6.6     4.85
    Group 1   7.4904    12.08     3.76                   5.26    5.71
    Group 2   10.3607   15.44     7.26                   4.83    7.14
    Group 3   7.6246    12.08     4.98                   4.26    10.42
    Group 4   7.7612    12.32     4.48                   6.46    8.24
    Group 5   8.1615    12.00     5.87                   5.58    11.51
    As proposed in game mechanics, showing the neighborhood's performance is more effective than showing the median, which is currently the “top” approach, at least in published economics papers ;-)
    31. Any questions? Thank you. Roberta Cuel, University of Trento & KIT. roberta.cuel@unitn.it
