Insemtives stanford

  1. 1. Incentives and motivators for collaborative knowledge creation Elena Simperl Talk at the Stanford Center for Biomedical Informatics Research, Stanford, CA, 8/26/2011 1
  2. 2. Insemtives in a nutshell• Many aspects of semantic content authoring naturally rely on human contribution.• Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth.• Insemtives works on – Best practices and guidelines for incentives-compatible technology design. – Enabling technology to realize incentivized semantic applications. – Showcased in three case studies: enterprise knowledge management; services marketplace; multimedia management within virtual worlds. 2
  3. 3. The approach• Typology of semantic content authoring tasks and the ways people could be motivated to address them.• Procedural ordering of methods and techniques to study incentives and motivators applicable to semantic content authoring scenarios.• Guidelines and best practices for the implementation of the results of such studies through participatory design, usability engineering, and mechanism design.• Pilots, showcases and enabling technology. 3
  4. 4. Incentives and motivators• Motivation is the driving force that makes humans achieve their goals.• Incentives are ‘rewards’ assigned by an external ‘judge’ to a performer for undertaking a specific task. – Common belief (among economists): incentives can be translated into a sum of money for all practical purposes.• Incentives can be related to both extrinsic and intrinsic motivations.• Extrinsic motivation if the task is considered boring, dangerous, useless, socially undesirable, or dislikable by the performer.• Intrinsic motivation is driven by an interest or enjoyment in the task itself.
  5. 5. Examples of applications 5
  6. 6. Extrinsic vs intrinsic motivations• Successful volunteer crowdsourcing is difficult to predict or replicate. – Highly context-specific. – Not applicable to arbitrary tasks.• Reward models are often easier to study and control.* – Different models: pay-per-time, pay-per-unit, winner-takes-all… – Not always easy to abstract from social aspects (free-riding, social pressure…). – May undermine intrinsic motivation. * in cases where performance can be reliably measured
  7. 7. Examples (ii) Mason & Watts: Financial incentives and the performance of crowds, HCOMP 2009.
  8. 8. Which tasks can be crowdsourced and how?• Modularity/Divisibility: can the task be divided into smaller chunks? How complex is the control flow? How can (intermediary) results be evaluated? – Casual games – Amazon’s MTurk – (Software development)• Skills and expertise: does the task address a broad or an expert audience? – CAPTCHAs – Casual games• Combinability: group performance – Additive: pulling a rope (group performs better than individuals, but each individual pulls less hard) – Conjunctive: running in a pack (performance is that of the weakest member, group size reduces group performance) – Disjunctive: answering a quiz (group size increases group performance in terms of the time needed to answer) 8
  9. 9. Amazon’s Mechanical Turk • Types of tasks: transcription, classification, content generation, data collection, image tagging, website feedback, usability tests.* • Increasingly used by academia. • Vertical solutions built on top. • Research on extensions for complex tasks.*
  10. 10. Patterns of tasks*• Solving a task – Generate answers – Find additional information – Improve, edit, fix• Evaluating the results of a task – Vote for accept/reject answers – Vote up/down to rank potentially correct answers – Vote best/top-n results• Flow control – Split the task – Aggregate partial results• Example: open-scale tasks in MTurk – Generate, then vote. – Introduce random noise to identify potential issues in the second step. (Diagram: generate answer → vote whether the label is correct for the image or not) * „Managing Crowdsourced Human Computation“@WWW2011, Ipeirotis
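The generate-then-vote pattern on the slide above can be made concrete with a short sketch. The Python fragment below is illustrative only (the data structures and function names are invented, not taken from Ipeirotis' tutorial or any MTurk API): a generation step yields candidate answers, a voting step ranks them, and a known-bad "noise" option is used to flag unreliable voters.

```python
import random
from collections import Counter

NOISE = "<deliberately-wrong-answer>"   # known-bad option injected into the vote step

def build_vote_options(generated_answers, noise=NOISE):
    """Step 2 setup: mix a known-bad 'noise' option among the generated answers
    so inattentive or malicious voters can be detected."""
    options = list(generated_answers) + [noise]
    random.shuffle(options)
    return options

def aggregate_votes(votes, noise=NOISE):
    """Step 2 aggregation: discard voters who endorsed the noise option,
    then return the answer with the most remaining votes."""
    bad_voters = {worker for worker, choice in votes if choice == noise}
    tally = Counter(choice for worker, choice in votes if worker not in bad_voters)
    return tally.most_common(1)[0][0] if tally else None

# Example: labels generated for one image in step 1, then five votes in step 2.
generated = ["mitochondrion", "mitochondria", "cell nucleus"]
options = build_vote_options(generated)
votes = [("w1", "mitochondrion"), ("w2", "mitochondrion"),
         ("w3", "cell nucleus"), ("w4", NOISE), ("w5", "mitochondrion")]
print(aggregate_votes(votes))   # -> mitochondrion
```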
  11. 11. Examples (iii) 11
  12. 12. Gamification features*• Accelerated feedback cycles. – Annual performance appraisals vs immediate feedback to maintain engagement.• Clear goals and rules of play. – Players feel empowered to achieve goals vs fuzzy, complex systems of rules in the real world.• Compelling narrative. – Gamification builds a narrative that engages players to participate and achieve the goals of the activity. *
  13. 13. What tasks can be gamified?*• Decomposable into simpler tasks.• Nested tasks.• Performance is measurable.• Obvious rewarding scheme.• Skills can be arranged in a smooth learning curve. * Image from
  14. 14. What is different about semantic systems?• Semantic Web tools vs applications. – Intelligent (specialized) Web sites (portals) with improved (local) search based on vocabularies and ontologies. – X2X integration (often combined with Web services). – Knowledge representation, communication and exchange.
  15. 15. What do you want your users to do?• Semantic applications – Context of the actual application. – Need to involve users in knowledge acquisition and engineering tasks? • Incentives are related to organizational and social factors. • Seamless integration of new features.• Semantic tools – Game mechanics. – Paid crowdsourcing (integrated).• Using results of casual games.
  16. 16. Case studies• Methods applied – Mechanism design. – Participatory design. – Games with a purpose. – Crowdsourcing via MTurk.• Semantic content authoring scenarios – Extending and populating an ontology. – Aligning two ontologies. – Annotation of text, media and Web APIs.
  17. 17. Mechanism design in practice• Identify a set of games that represents your situation.• See recommendations in the literature. • Translate what economists do into concrete scenarios. • Ensure that the economists’ proposals fit the concrete situation.• Run user and field experiments. Results influence HCI, social and data management aspects. 8/26/2011 17
  18. 18. Factors affecting mechanism design• Goal – Communication level (about the goal of the tasks): high / medium / low – Participation level (in the definition of the goal): high / medium / low – Identification with the goal: high / low• Tasks – Variety of tasks: high / medium / low – Specificity of tasks: high / medium / low – Clarity level: high / low – Required skills: highly specific vs trivial/common• Social structure – Hierarchy: hierarchical vs neutral• Nature of good being produced – Private good, public good, common resource, club good 8/26/2011 18
  19. 19. Phase 3: OKenterprise annotation tool 4/14/11 19
  20. 20. Mechanism design for Telefonica• Interplay of two alternative games – Principal agent game • The management wants employees to do a certain action but does not have tools to check whether employees perform their best effort. • Various mechanisms can be used to align employees’ and employers’ interests – Piece-rate wages (labour-intensive tasks) – Performance measurement (all levels of tasks) – Tournaments (internal labour market) – Public goods game • Semantic content creation is non-rival and non-excludable • The problem of free riding• Additional problem: what is the optimal time and effort for employees to dedicate to annotation? 4/14/11 20
  21. 21. Mechanism design for Telefonica (ii)• Principal agent game – Pay-per-performance • Points assigned for each contribution – Quality of performance measurement • Rate user contributions • Assign quality reviewers – Tournament • Visibility of contributions by single users • Search for an expert based on contributions • Relative standing compared to other users• Public goods game – To let users know that their contribution was valuable – The portal should be useful • Possibility to search experts, documents, etc. • Possibility to form groups of users and share contributions – The portal should be easy to use• Experiments – Pay-per-tag vs winner-takes-all for annotation. 4/14/11 21
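As an illustration of the reward models compared in the Telefonica experiments, the sketch below contrasts a pay-per-tag scheme with a winner-takes-all tournament. The rates, prize, and contribution counts are invented for the example and are not the project's actual parameters.

```python
def pay_per_tag(contributions, rate_per_tag=0.10):
    """Pay-per-performance: each accepted annotation earns a fixed amount."""
    return {user: count * rate_per_tag for user, count in contributions.items()}

def winner_takes_all(contributions, prize=50.0):
    """Tournament: the whole prize goes to the top contributor."""
    winner = max(contributions, key=contributions.get)
    return {user: (prize if user == winner else 0.0) for user in contributions}

# Example: annotation counts per employee over one campaign (made-up numbers).
contributions = {"alice": 120, "bob": 80, "carol": 15}
print(pay_per_tag(contributions))       # {'alice': 12.0, 'bob': 8.0, 'carol': 1.5}
print(winner_takes_all(contributions))  # {'alice': 50.0, 'bob': 0.0, 'carol': 0.0}
```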
  22. 22. Tasks in knowledge engineering• Definition of vocabulary• Conceptualization – Based on competency questions – Identifying instances, classes, attributes, relationships• Documentation – Labeling and definitions. – Localization• Evaluation and quality assurance – Matching conceptualization to documentation• Alignment• Validating the results of automatic methods 22
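To make the conceptualization and documentation tasks listed above concrete, here is a minimal sketch using the rdflib library; the namespace and the terms (Gene, BRCA1) are invented for illustration and are not part of any INSEMTIVES ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/biomed#")   # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Conceptualization: introduce a class and an instance of it.
g.add((EX.Gene, RDF.type, OWL.Class))
g.add((EX.BRCA1, RDF.type, EX.Gene))

# Documentation: labels and a definition, including a localized (German) label.
g.add((EX.BRCA1, RDFS.label, Literal("BRCA1", lang="en")))
g.add((EX.BRCA1, RDFS.comment, Literal("Breast cancer type 1 susceptibility gene.", lang="en")))
g.add((EX.Gene, RDFS.label, Literal("Gen", lang="de")))

print(g.serialize(format="turtle"))
```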
  23. 23. http://www.ontogame.org 23
  24. 24. OntoGame API• API that provides several methods shared by the OntoGame games, such as: – Different agreement types (e.g., selection agreement). – Input matching (e.g., majority). – Game modes (multi-player, single-player). – Player reliability evaluation. – Player matching (e.g., finding the optimal partner to play). – Resource (i.e., data needed for games) management. – Creating semantic content.• wvc/insemtives/generic-gaming-toolkit 8/26/2011 24
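The slide lists the toolkit's capabilities but not its signatures, so the sketch below is hypothetical (all function names are invented). It only illustrates how majority-based input matching and a simple player-reliability update might work.

```python
from collections import Counter

def majority_match(inputs, threshold=0.5):
    """Majority input matching: accept a label once more than `threshold`
    of the paired players' inputs agree on it. Returns (label, agreed?)."""
    if not inputs:
        return None, False
    label, count = Counter(inputs).most_common(1)[0]
    return label, count / len(inputs) > threshold

def update_reliability(reliability, agreed, learning_rate=0.1):
    """Nudge a player's reliability toward 1 on agreement, toward 0 otherwise."""
    target = 1.0 if agreed else 0.0
    return reliability + learning_rate * (target - reliability)

# Example round: three players annotate the same image region.
inputs = ["fish", "fish", "coral"]
label, agreed = majority_match(inputs)
print(label, agreed)                      # fish True
print(update_reliability(0.7, agreed))    # 0.73
```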
  25. 25. OntoGame games 8/26/2011 25
  26. 26. SEAFish – Annotating images 8/26/2011 26
  27. 27. Lessons learned• Approach is feasible for mainstream domains, where a (large-enough) knowledge corpus is available.• Advertisement is important.• Game design vs useful content. – Reusing well-known game paradigms. – Reusing game outcomes and integrating them into existing workflows and tools.• But the approach is by design less applicable to – Knowledge-intensive tasks that are not easily nestable. – Repetitive tasks → players’ retention?• Cost-benefit analysis.
  28. 28. Using Mechanical Turk for semantic content authoring• Many design decisions similar to GWAPs. – But clearer incentive structures. – How to reliably compare games and MTurk results?• Automatic generation of HITs depending on the types of tasks and inputs.• Integration in productive environments. – Protégé plug-in for managing and using crowdsourcing results.
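A minimal sketch of automatically generating annotation HITs with boto3's MTurk client, assuming the sandbox endpoint. The task text, reward, and form layout are placeholders rather than the project's actual configuration (which relied on its own tooling and the Protégé plug-in mentioned above).

```python
import boto3

# Sandbox endpoint for testing; remove endpoint_url to post real HITs.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

def annotation_question(snippet):
    """Build an HTMLQuestion asking workers to tag a text snippet.
    <crowd-form> (from AWS's crowd-html-elements) handles answer submission."""
    return f"""<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html><html><body>
      <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
      <crowd-form>
        <p>Suggest tags that describe the following snippet:</p>
        <blockquote>{snippet}</blockquote>
        <crowd-input name="tags" placeholder="comma-separated tags" required></crowd-input>
      </crowd-form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

def create_annotation_hit(snippet):
    """Generate one HIT per input snippet; reward and timings are placeholders."""
    return mturk.create_hit(
        Title="Tag a short text snippet",
        Description="Read a snippet and suggest descriptive tags.",
        Keywords="tagging, annotation, semantics",
        Reward="0.05",
        MaxAssignments=3,                 # redundancy for later voting/aggregation
        AssignmentDurationInSeconds=600,
        LifetimeInSeconds=86400,
        Question=annotation_question(snippet),
    )

hit = create_annotation_hit("BRCA1 mutations increase the risk of breast cancer.")
print(hit["HIT"]["HITId"])
```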
  29. 29. Thank you