Insemtives iswc2011 session1
Slides of the Insemtives tutorial at the ISWC 2011 in Bonn, Germany. More slides available at http://www.insemtives.eu/iswc2011-tutorial/

Presentation Transcript

  • Ten ways to make your semantic app addicted - REVISITED
    Elena Simperl
    Tutorial at ISWC 2011, Bonn, Germany, 10/24/2011
    www.insemtives.eu
  • Executive summary
    • Many aspects of semantic content authoring naturally rely on human contribution.
    • Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth.
    • This tutorial is about
      – Methods and techniques to study incentives and motivators applicable to semantic content authoring scenarios.
      – How to implement the results of such studies through technology design, usability engineering, and game mechanics.
  • Incentives and motivators
    • Motivation is the driving force that makes humans achieve their goals.
    • Incentives are ‘rewards’ assigned by an external ‘judge’ to a performer for undertaking a specific task.
      – Common belief (among economists): incentives can be translated into a sum of money for all practical purposes.
    • Incentives can be related to both extrinsic and intrinsic motivations.
    • Extrinsic motivation applies if the task is considered boring, dangerous, useless, socially undesirable, or dislikable by the performer.
    • Intrinsic motivation is driven by an interest or enjoyment in the task itself.
  • Examples of applications
  • Extrinsic vs intrinsic motivations
    • Successful volunteer crowdsourcing is difficult to predict or replicate.
      – Highly context-specific.
      – Not applicable to arbitrary tasks.
    • Reward models are often easier to study and control.*
      – Different models: pay-per-time, pay-per-unit, winner-takes-all…
      – Not always easy to abstract from social aspects (free-riding, social pressure…).
      – May undermine intrinsic motivation.
    * in cases where performance can be reliably measured
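To make the difference between these reward models concrete, here is a minimal sketch of how payouts could be computed under pay-per-unit, pay-per-time and winner-takes-all schemes. The `Contribution` record, rates and function names are illustrative assumptions, not part of the tutorial or the INSEMTIVES tooling.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    worker: str
    units: int        # e.g. number of accepted annotations
    minutes: float    # time spent on the task
    quality: float    # measured performance, 0.0-1.0

def pay_per_unit(contribs, rate=0.05):
    """Each accepted unit earns a fixed amount."""
    return {c.worker: c.units * rate for c in contribs}

def pay_per_time(contribs, hourly=6.0):
    """Payment depends on time spent, regardless of output."""
    return {c.worker: c.minutes / 60 * hourly for c in contribs}

def winner_takes_all(contribs, prize=50.0):
    """Only the best-performing contributor is rewarded."""
    best = max(contribs, key=lambda c: c.quality)
    return {c.worker: (prize if c is best else 0.0) for c in contribs}

if __name__ == "__main__":
    batch = [
        Contribution("alice", units=40, minutes=90, quality=0.92),
        Contribution("bob", units=55, minutes=60, quality=0.81),
    ]
    print(pay_per_unit(batch))      # rewards output volume
    print(pay_per_time(batch))      # rewards time invested
    print(winner_takes_all(batch))  # rewards only the top performer
```

Note how the three schemes rank the same two contributors differently, which is exactly why reward models are easier to study and control only when performance can be reliably measured.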
  • Examples (ii)
    Mason & Watts: Financial incentives and the performance of crowds, HCOMP 2009.
  • Amazon’s Mechanical Turk
    • Types of tasks: transcription, classification, and content generation, data collection, image tagging, website feedback, usability tests.*
    • Increasingly used by academia.
    • Vertical solutions built on top.
    • Research on extensions for complex tasks.
    * http://behind-the-enemy-lines.blogspot.com/2010/10/what-tasks-are-posted-on-mechanical.html
  • Tasks amenable to crowdsourcing
    • Tasks that are decomposable into simpler tasks that are easy to perform.
    • Performance is measurable.
    • No specific skills or expertise are required.
  • Patterns of tasks*
    • Solving a task
      – Generate answers
      – Find additional information
      – Improve, edit, fix
    • Evaluating the results of a task
      – Vote for accept/reject answers
      – Vote up/down to rank potentially correct answers
      – Vote best/top-n results
    • Flow control
      – Split the task
      – Aggregate partial results
    • Example: open-scale tasks in MTurk
      – Generate, then vote.
      – Introduce random noise to identify potential issues in the second step.
    [Workflow diagram: generate answer → vote “correct image or not?” → label]
    * „Managing Crowdsourced Human Computation“ @ WWW2011, Ipeirotis
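The generate-then-vote pattern and its vote-aggregation step can be summarized in a short sketch. This is an illustration under assumed inputs; `generate` and `collect_votes` are hypothetical callables standing in for whatever mechanism (MTurk HITs, a game round) actually collects the answers and votes.

```python
from collections import Counter

def majority_vote(votes, threshold=0.5):
    """Aggregate accept/reject votes; return the majority label and its support."""
    if not votes:
        return None, 0.0
    counts = Counter(votes)
    label, count = counts.most_common(1)[0]
    support = count / len(votes)
    return (label if support > threshold else None), support

def generate_then_vote(items, generate, collect_votes):
    """First step: workers generate candidate answers for each item.
    Second step: a different set of workers votes on each candidate."""
    results = {}
    for item in items:
        candidates = generate(item)                  # e.g. answers from 3 workers
        accepted = []
        for candidate in candidates:
            votes = collect_votes(item, candidate)   # e.g. 5 "accept"/"reject" votes
            label, support = majority_vote(votes)
            if label == "accept":
                accepted.append((candidate, support))
        # Flow control: aggregate partial results, best-supported answers first.
        results[item] = sorted(accepted, key=lambda x: -x[1])
    return results
```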
  • Examples (iii)
  • What makes game mechanics successful?*
    • Accelerated feedback cycles.
      – Annual performance appraisals vs immediate feedback to maintain engagement.
    • Clear goals and rules of play.
      – Players feel empowered to achieve goals vs a fuzzy, complex system of rules in the real world.
    • Compelling narrative.
      – Gamification builds a narrative that engages players to participate and achieve the goals of the activity.
    • But in the end it’s about what task users want to get better at.
    * http://www.gartner.com/it/page.jsp?id=1629214
  • Guidelines
    • Focus on the actual goal and incentivize related actions.
      – Write posts, create graphics, annotate pictures, reply to customers in a given time…
    • Build a community around the intended actions.
      – Reward helping each other in performing the task and interaction.
      – Reward recruiting new contributors.
    • Reward repeated actions.
      – Actions become part of the daily routine.
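Read operationally, these guidelines amount to a small set of reward rules. The sketch below is purely illustrative; the action names and point values are invented for the example and are not taken from any INSEMTIVES system.

```python
# Hypothetical reward rules reflecting the guidelines above: reward
# goal-related actions, community actions, and repeated actions.
POINTS = {
    "annotate_picture": 5,      # action tied to the actual goal
    "help_other_user": 3,       # community building
    "recruit_contributor": 10,  # growing the contributor base
}
STREAK_BONUS = 2  # extra points per goal-related action once it becomes routine

def score(events):
    """events: list of (user, action) tuples in chronological order."""
    totals, streaks = {}, {}
    for user, action in events:
        base = POINTS.get(action, 0)
        # Reward repetition: consecutive goal-related actions earn a bonus.
        if action == "annotate_picture":
            streaks[user] = streaks.get(user, 0) + 1
        else:
            streaks[user] = 0
        bonus = STREAK_BONUS if streaks[user] > 1 else 0
        totals[user] = totals.get(user, 0) + base + bonus
    return totals
```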
  • What tasks can be gamified?*
    • Tasks that are decomposable into simpler tasks, nested tasks.
    • Performance is measurable.
    • Obvious rewarding scheme.
    • Skills can be arranged in a smooth learning curve.
    * http://www.lostgarden.com/2008/06/what-actitivies-that-can-be-turned-into.html
  • What is different about semantic systems?
    • It’s still about the context of the actual application.
    • User engagement with semantic tasks is needed in order to
      – Ensure knowledge is relevant and up to date.
      – Make sure people accept the new solution and understand its benefits.
      – Avoid cold-start problems.
      – Optimize maintenance costs.
  • Tasks in knowledge engineering
    • Definition of vocabulary
    • Conceptualization
      – Based on competency questions
      – Identifying instances, classes, attributes, relationships
    • Documentation
      – Labeling and definitions
      – Localization
    • Evaluation and quality assurance
      – Matching conceptualization to documentation
    • Alignment
    • Validating the results of automatic methods
  • http://www.ontogame.org
    http://apps.facebook.com/ontogame
  • OntoGame API
    • An API providing several methods shared by the OntoGame games, such as:
      – Different agreement types (e.g., selection agreement).
      – Input matching (e.g., majority).
      – Game modes (multi-player, single-player).
      – Player reliability evaluation.
      – Player matching (e.g., finding the optimal partner to play).
      – Resource (i.e., data needed for the games) management.
      – Creating semantic content.
    • http://insemtives.svn.sourceforge.net/viewvc/insemtives/generic-gaming-toolkit
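As a rough idea of what a shared gaming-toolkit interface of this kind could look like, here is a hedged sketch covering agreement types, input matching, reliability tracking and player matching. The class and method names are assumptions for illustration only and do not reflect the actual OntoGame / generic-gaming-toolkit code.

```python
from abc import ABC, abstractmethod
from collections import Counter

class AgreementStrategy(ABC):
    """Decides whether two players' inputs count as agreement."""
    @abstractmethod
    def agree(self, input_a, input_b) -> bool: ...

class SelectionAgreement(AgreementStrategy):
    """Agreement when both players select the same option."""
    def agree(self, input_a, input_b) -> bool:
        return input_a == input_b

class MajorityMatcher:
    """Input matching across many rounds: keep the majority answer."""
    def match(self, inputs):
        counts = Counter(inputs)
        answer, _ = counts.most_common(1)[0]
        return answer

class PlayerPool:
    """Small stand-in for player reliability evaluation and player matching."""
    def __init__(self):
        self.reliability = {}  # player id -> score in [0, 1]

    def update_reliability(self, player, agreed: bool):
        old = self.reliability.get(player, 0.5)
        self.reliability[player] = 0.9 * old + 0.1 * (1.0 if agreed else 0.0)

    def find_partner(self, player, waiting):
        # Match with the most reliable waiting player; a single-player mode
        # could fall back to recorded inputs instead.
        candidates = [p for p in waiting if p != player]
        return max(candidates, key=lambda p: self.reliability.get(p, 0.5), default=None)
```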
  • OntoGame games
  • Case studies
    • Methods applied
      – Mechanism design.
      – Participatory design.
      – Games with a purpose.
      – Crowdsourcing via MTurk.
    • Semantic content authoring scenarios
      – Extending and populating an ontology.
      – Aligning two ontologies.
      – Annotation of text, media and Web APIs.
  • Lessons learned
    • The approach is feasible for mainstream domains, where a (large enough) knowledge corpus is available.
    • Advertisement is important.
    • Game design vs useful content.
      – Reusing well-known game paradigms.
      – Reusing game outcomes and integrating them into existing workflows and tools.
    • But the approach is by design less applicable to
      – Knowledge-intensive tasks that are not easily nestable.
      – Repetitive tasks → players’ retention?
    • Cost-benefit analysis.
  • Using Mechanical Turk for semantic content authoring
    • Many design decisions are similar to GWAPs.
      – But with clear incentive structures.
      – How to reliably compare game and MTurk results?
    • Automatic generation of HITs depending on the types of tasks and inputs.
    • Integration into productive environments.
      – Protégé plug-in for managing and using crowdsourcing results.
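The slides do not specify how HITs are generated; as one possible illustration using today's AWS SDK for Python (boto3, which postdates this tutorial), an annotation HIT per input item could be created roughly as follows. The question HTML, reward, and ontology-classification task are assumptions for the sketch, not the tutorial's actual configuration.

```python
import boto3

# Sandbox endpoint so the sketch can be tried without spending money.
MTURK_SANDBOX = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"

def make_question_html(entity_label: str) -> str:
    """Wrap a single annotation task in MTurk's HTMLQuestion format."""
    return f"""
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <!DOCTYPE html>
        <html><body>
          <form name="mturk_form" method="post"
                action="https://workersandbox.mturk.com/mturk/externalSubmit">
            <input type="hidden" name="assignmentId" value="">
            <p>Suggest a category (class) for the entity: <b>{entity_label}</b></p>
            <input type="text" name="suggested_class">
            <input type="submit">
          </form>
        </body></html>
      ]]></HTMLContent>
      <FrameHeight>400</FrameHeight>
    </HTMLQuestion>"""

def create_annotation_hits(entity_labels):
    """Generate one HIT per entity to be classified."""
    mturk = boto3.client("mturk", endpoint_url=MTURK_SANDBOX)
    hit_ids = []
    for label in entity_labels:
        hit = mturk.create_hit(
            Title="Suggest a class for an ontology entity",
            Description="Read the entity label and propose the category it belongs to.",
            Keywords="annotation, ontology, classification",
            Reward="0.05",
            MaxAssignments=3,            # several workers per item, for later voting
            LifetimeInSeconds=86400,
            AssignmentDurationInSeconds=600,
            Question=make_question_html(label),
        )
        hit_ids.append(hit["HIT"]["HITId"])
    return hit_ids
```

The collected assignments could then feed the same majority-vote aggregation sketched earlier, and their curated results be pulled into an ontology editor such as the Protégé plug-in mentioned above.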
  • Outline of the tutorial
    14:00 – 14:45  Human contributions in semantic content authoring
    14:45 – 15:30  Case study: motivating employees to annotate enterprise content semantically at Telefonica
    15:30 – 16:00  Coffee break
    16:00 – 16:45  Case study: Crowdsourcing the annotation of dynamic Web content at seekda
    16:45 – 17:30  Case study: Content tagging at MoonZoo and MyTinyPlanets
    17:30 – 18:00  Ten ways to make your semantic app addicted - revisited
  • Realizing the Semantic Web by encouraging millions of end-users to create semantic content.