We start with a laboratory experiment with university students. Running an experiment in the laboratory requires us to provide incentives for students to participate. They are not friends doing us a favor by testing the software, nor students in our course earning course credit – that is not what we want. We want subjects who are neutral towards us and towards the task, and who respond only to the incentive structure we provide. There is a show-up fee we must pay to students no matter what, simply to maintain the laboratory's reputation and ensure that students keep coming to experiments organized by other researchers as well. You can think of environments where you can run the test without paying a participation fee, offering only the flexible part of the payment (Mechanical Turk?). Don't look at the €5; concentrate on the flexible part of the payment.
(Festinger 1954; Buunk and Mussweiler 2001; Suls, Martin, and Wheeler 2002; Asch 1956; Akerlof 1980; Jones 1984; Bernheim 1994)
SemTech 2012 - Making your semantic app addictive: Incentivizing Users
Making Your Semantic Application Addictive: Incentivizing Users
Roberta Cuel
University of Trento (Italy) – KIT (Germany)
firstname.lastname@example.org – email@example.com
Topics of the session
• The role of human contributions in the creation of semantic descriptions of digital artifacts.
• Methods and principles for the design of incentive-compatible semantic-annotation technology.
• Case studies:
  • TID: Telefónica R&D corporate knowledge
  • "Taste it! Try it!" mobile app
Semantic content authoring
• Relies on human input for:
  • Modeling a domain
  • Understanding text and media content
  • Integrating data sources originating from different contexts
  • …
• Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth.
• Realize incentivized semantic applications.
What is the secret to sustainable success?
• Offer a solution to a real problem: the right solution at the right time is at least 50% of the success.
Our approach (ideally): field → desk → lab → field
A procedural ordering of methods to develop incentive-compatible applications.
Motivations in the Web 2.0
• Motivation and incentives
  – Reciprocity
  – Reputation
  – Competition
  – Altruism
  – Self-esteem
  – Fun
  – Money
Theories of motivation (from the Latin movere, "to move")
Performance = f(ability × motivation)
Incentives → Motivation → Performance

Psychological meaning: an internal mental state pertaining to the initiation, direction, persistence, intensity, and termination of behavior.

Content theories of motivation:
• Need theories
• Herzberg's "two factor" theory
• McClelland's achievement-power-affiliation theory
• Job characteristics approach (skill variety, autonomy, …)

Process theories of motivation:
• Reinforcement theory
• Goal-setting theory
• Expectancy theory
• Organizational justice theory
• …
The incentive analytical tool
A framework that characterizes an annotation setting along several dimensions:
• Nature of the good being produced: public good (non-rival, non-exclusive), common good, or private good (rival, exclusive)
• Social structure: hierarchy, hierarchy-neutral, or highly hierarchical
• Communication level (about the goal of the tasks): high / medium / low
• Participation level (in the definition of the goal): high / medium / low
• Identification with the goal: high / low
• Variety of the tasks: high / medium / low
• Specificity of the goal: high / medium / low
• Clarity of the goal: high / low
• Required skills: highly specific / trivial
Two case studies
• TID: Telefónica R&D corporate knowledge
• "Taste it! Try it!" mobile app for reviewing restaurants and other PoIs
Enterprise Knowledge Management @ TID - Spain
• Services of the intranet portal
  • Document management
  • Corporate directories
  • Pilot/product/service catalogues
  • News
  • Bank of ideas
  • Blogs, wikis, forums
  • Search engines
• Some info
  • 1200 employees in 7 cities and 3 countries (↑)
  • ~3050 visits per day, ~56000 page views (impressions) per day, average visit time: 20'
Field and domain analysis
Domain analysis:
• Site visits and semi-structured, qualitative interviews (communication processes, existing usage practices, problems, tools/solutions)
  • Tape recording, transcription
  • Data analysis via ex-post categorization
• Focus group discussions
• Usability lab tests and expert walkthroughs
• Lab experiment
  • Two payment schemes
• Field experiment
  • Natural vs. semantic annotation
The incentive analytical tool and TID motivations
We need to design the "game" so that it achieves the desired outcome in annotations without distracting employees too much from their main job.
The mechanism design exercise in our case study (I)
Interplay of two alternative games:
• Principal-agent game
  • No tools to check that employees perform at their best
  • Management can implement various incentives:
    • Piece-rate wages (labour-intensive tasks)
    • Performance measurement (all levels of tasks)
    • Tournaments (internal labour market)
• Public goods game
  • Semantic content creation is a public good (non-excludable and non-rival)
  • The problem of free riding
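The free-riding problem in the public goods game can be illustrated with a minimal sketch. The endowment, multiplier, and contribution figures below are illustrative assumptions, not data from the case study: each player keeps their endowment minus their contribution and receives an equal share of the multiplied common pot, so whenever the per-player return on the pot is below 1, contributing less always raises a player's own payoff.

```python
def payoff(contributions, endowment=10.0, multiplier=1.6):
    """Per-player payoffs in a linear public goods game.

    Each of n players keeps (endowment - own contribution) and receives
    an equal share of the multiplied common pot. With 1 < multiplier < n,
    full contribution is socially optimal, but contributing nothing
    (free riding) is the individually dominant strategy.
    """
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: each earns 10 - 10 + 1.6 * 40 / 4 = 16
full = payoff([10, 10, 10, 10])

# One player free rides while the others contribute:
# the free rider earns 22 while contributors drop to 12
mixed = payoff([0, 10, 10, 10])
```

If everyone reasons this way and contributes nothing, each player keeps only the endowment of 10, which is worse for all than the full-contribution payoff of 16 – exactly the dilemma that incentive design has to counter.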
… harnessing network and reputation effects
• Competitive environment
• Internal labour market
• Reputation in terms of expertise
• The HR department should be involved
Field experiment
Real users and real tasks; the tasks should have:
• practical usefulness for users (search)
• social implications, providing information about people and their performance
Some results
• 2761 annotations, 82% are semantic
• Social rewards are as strong as monetary rewards! (Mann-Whitney test)

                                 Competition   Social
  Number of annotations          1589          1172
  % of semantic annotations      88.92%        71.84%
  Maximum number of annotations  439           262
  Annotations of free text       180           326

Competition treatment: €200. Social treatment: daily contributor on Yammer.
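The Mann-Whitney (rank-sum) test used above compares two samples without assuming normality, which suits skewed per-user contribution counts. Below is a minimal pure-Python sketch using the normal approximation for the p-value; the function name and the sample data are illustrative, not the experiment's actual per-subject data.

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Ranks both samples together (ties share the mean rank), computes
    the U statistic, and returns (U, approximate two-sided p-value).
    """
    combined = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U statistic for sample a
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 1 + math.erf(z / math.sqrt(2))  # = 2 * Phi(z); u <= mu, so z <= 0
    return u, p

# Illustrative per-user annotation counts (made up, not the TID data):
competition = [40, 52, 31, 60, 45, 38]
social = [28, 35, 22, 41, 30, 26]
u, p = mann_whitney_u(competition, social)
```

In practice one would call `scipy.stats.mannwhitneyu`; the hand-rolled version above just makes the ranking logic explicit.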
Taste it! Try it!
Goals of the tool:
• Provide semantically-enabled reviews
Features:
• Sufficiently easy to create for end-user acceptance
• Keeps users entertained - Facebook integration and badges
• Offers a personalized, semantic, context-aware recommendation process
Research context: (ontology-based) collaborative filtering and user clustering, structuring and disambiguation of the reviews using domain knowledge and incentives
Experiment
Hypotheses:
• Points vs. badges
• No information about others vs. information
• No information about oneself vs. information
(6 groups) × (~25 students) = ~150 students
• Group 0: points, piece rate, no info on others, private info, web based
• Group 1: points, piece rate, median, public info
• Group 2: points, piece rate, neighborhood, public info
• Group 3: badges, piece rate, no info on others, private info, web based
• Group 4: badges, piece rate, median, public info - treatment
• Group 5: badges, piece rate, neighborhood, public info - treatment
Points: max. 8 for creating reviews and 2 points for filling in the questionnaire
            Average   Avg. number   Avg. number of         Average   Avg. number
            score     of reviews    semantic annotations   time      of actions ×10
  Group 0   7.4223    11.41         4.41                   6.6       4.85
  Group 1   7.4904    12.08         3.76                   5.26      5.71
  Group 2   10.3607   15.44         7.26                   4.83      7.14
  Group 3   7.6246    12.08         4.98                   4.26      10.42
  Group 4   7.7612    12.32         4.48                   6.46      8.24
  Group 5   8.1615    12.00         5.87                   5.58      11.51

As proposed in game mechanics, showing the neighborhood's performance is more effective than showing the median, which is currently the "top" approach, at least in published economics papers ;-)
Any questions? Thank you
Roberta Cuel
University of Trento & KIT
firstname.lastname@example.org