SemTech 2012 - Making your semantic app addictive: Incentivizing Users
1. Making Your Semantic
Application Addictive:
Incentivizing Users
Roberta Cuel
University of Trento (Italy) – KIT (Germany)
roberta.cuel@unitn.it – roberta.cuel@kit.edu
2. Topics of the session
• The role of human contributions in the creation of
semantic descriptions of digital artifacts.
• Methods and principles for the design of
incentives-compatible semantic-annotation
technology.
• Case studies:
• TID: Telefónica R&D corporate knowledge
• “Taste it! Try it” mobile app
3.
4. Semantic content authoring
• Relies on human input:
• Modeling a domain
• Understanding text and media content
• Integrating data sources originating from different contexts
• …
• Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth.
• Realize incentivized semantic applications (a minimal annotation sketch follows this list).
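To make the "human input" point concrete, here is a minimal sketch of what a single contributed semantic annotation might look like. The photo URI and vocabulary properties are illustrative assumptions, not from the talk:

```python
# A user asserts machine-readable statements (triples) about a digital artifact.
# All URIs below are made up for illustration.
photo = "http://example.org/photos/42"

annotations = [
    (photo, "http://example.org/vocab/depicts", "http://dbpedia.org/resource/Colosseum"),
    (photo, "http://example.org/vocab/takenIn", "http://dbpedia.org/resource/Rome"),
]

# Print the statements in an N-Triples-like form.
for subject, predicate, obj in annotations:
    print(f"<{subject}> <{predicate}> <{obj}> .")
```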
5. What is the secret to sustainable success?
• Offer a solution to a real problem: the right solution at the right time – at least 50% of the success
6. Our approach
Ideally: field → desk → lab → field
A procedural ordering of methods to develop incentive-compatible applications
7. Motivations in the Web 2.0
• Motivation and incentives
– Reciprocity
– Reputation
– Competition
– Altruism
– Self-esteem
– Fun
– Money
9. Theories of motivation (from the Latin movere, "to move")
Performance = f(ability × motivation)
Incentives → Motivation → Performance
Psychological meaning: an internal mental state pertaining to the initiation, direction, persistence, intensity, and termination of behavior.
Content theories of motivation
• Need theories
• Herzberg's "two factor" theory
• McClelland's achievement-power-affiliation theory
Job characteristic approach (skill variety, autonomy, ...)
Process theories of motivation
• Reinforcement theory
• Goal-setting theory
• Expectancy theory
• Organizational justice theory
• ...
10. The incentive analytical tool
• Goal
  • Communication level (about the goal of the tasks): high / medium / low
  • Participation level (in the definition of the goal): high / medium / low
  • Identification with the goal: high / low
  • Clarity level: high / low
• Tasks
  • Variety of tasks: high / medium / low
  • Specificity of tasks: high / medium / low
  • Required skills: highly specific / trivial / common
• Social structure: hierarchical / neutral / …
• Nature of the good being produced
  • Public good (non-rival, non-exclusive)
  • Private good (rival, exclusive)
(One way these dimensions could be encoded is sketched after this list.)
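As a rough illustration of how the analytical tool could be applied, the dimensions above can be encoded as a structured record for a given annotation setting. The field names and the example values are placeholders of mine, not an official schema or the actual TID assessment:

```python
# Hypothetical profile of an annotation setting along the tool's dimensions.
# All values are placeholders chosen for illustration only.
setting_profile = {
    "goal": {
        "communication_level": "medium",   # about the goal of the tasks
        "participation_level": "low",      # in the definition of the goal
        "identification": "high",
        "clarity": "high",
    },
    "tasks": {
        "variety": "medium",
        "specificity": "high",
        "required_skills": "common",
    },
    "social_structure": "hierarchical",
    "good_produced": "public",             # non-rival, non-exclusive
}

print(setting_profile["good_produced"])    # -> public
```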
11. Two case studies
• TID: Telefónica R&D corporate knowledge
• "Taste it! Try it!" mobile app for reviewing restaurants and other PoIs
12. Enterprise Knowledge
Management @ TID - Spain
• Services of the intranet
portal
• Document management
• Corporate directories
• Pilot/Product/Service
catalogues
• News
• Bank of ideas
• Blogs, wikis, forums
• Search engines
• Some info
• 1,200 employees in 7 cities and 3 countries (↑)
• ~3,050 visits per day, ~56,000 page views (impressions) per day, average visit time: 20 minutes
13. Field and domain analysis
Domain analysis
• Site visits; semi-structured, qualitative interviews (communication processes, existing usage practices, problems, tools/solutions)
• Tape recording, transcription
• Data analysis via ex-post categorization
• Focus group discussions
• Usability lab tests and expert walkthroughs
Lab experiment
• Two payment schemes
Field experiment
• Natural vs. semantic annotation
14. The incentive analytical tool and TID motivations
We need to design the "game" in a way that achieves the desired annotation outcome without distracting employees too much from their main job.
15. The mechanism design exercise in our case study (I)
Interplay of two alternative games:
• Principal-agent game
• No tools to check whether employees perform at their best
• Management can implement various incentives:
• Piece-rate wages (labour-intensive tasks)
• Performance measurement (all levels of tasks)
• Tournaments (internal labour market)
• Public goods game
• Semantic content creation is a public good (non-excludable and non-rival)
• The problem of free riding (a payoff sketch follows this list)
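To see why free riding is the problem here, a sketch of a standard linear public goods game; the endowment, multiplier, and group size are assumptions of mine, not parameters from the case study:

```python
# Linear public goods game: each of n players keeps (endowment - contribution) and
# receives an equal share of the multiplied common pot. With 1 < multiplier < n,
# contributing nothing maximizes the individual payoff even though full
# contribution maximizes the group payoff.

def payoffs(contributions, endowment=10.0, multiplier=1.6):
    n = len(contributions)
    public_share = multiplier * sum(contributions) / n
    return [endowment - c + public_share for c in contributions]

print(payoffs([10, 10, 10, 10]))   # everyone contributes: each earns 16.0
print(payoffs([0, 10, 10, 10]))    # one free rider: earns 22.0, the rest earn 12.0
```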
18. Lab experiment
36 students
Individual task: annotation of images
Time: 8 minutes
Two reward/incentive schemes (a payout sketch follows):
• Pay per click: €0.03 per tag
• Winner-takes-all: €20
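A sketch of what the two reward schemes imply for payouts; the subject IDs and per-subject tag counts are invented for illustration:

```python
PAY_PER_TAG = 0.03   # euro per tag (pay-per-click scheme)
WINNER_PRIZE = 20.0  # euro (winner-takes-all scheme)

def pay_per_click(tag_counts):
    # Every subject earns a fixed amount per tag they produced.
    return {user: round(n * PAY_PER_TAG, 2) for user, n in tag_counts.items()}

def winner_takes_all(tag_counts):
    # Only the subject with the most tags earns the prize.
    winner = max(tag_counts, key=tag_counts.get)
    return {user: (WINNER_PRIZE if user == winner else 0.0) for user in tag_counts}

tags = {"s01": 120, "s02": 95, "s03": 140}      # hypothetical tag counts
print(pay_per_click(tags))      # {'s01': 3.6, 's02': 2.85, 's03': 4.2}
print(winner_takes_all(tags))   # {'s01': 0.0, 's02': 0.0, 's03': 20.0}
```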
26. … harnessing network and reputation effects
• Competitive environment
• Internal labour market
• Reputation in terms of expertise
• The HR department should be involved
27. Field experiment
Real users; the tasks should have:
– practical usefulness for users (search)
– social implications, providing information about people and their performance
28. Some results
• 2,761 annotations, 82% are semantic
• Social rewards are as strong as monetary rewards! (Mann-Whitney test; a sketch of such a test follows the table)
Treatments: Competition (€200 prize) vs. Social (daily contributor on Yammer)
                                  Competition   Social
Number of annotations             1,589         1,172
% of semantic annotations         88.92%        71.84%
Maximum number of annotations     439           262
Annotations of free text          180           326
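The slide names a Mann-Whitney test comparing the two treatments. As a sketch of how such a comparison is typically run; the per-user annotation counts below are invented, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical annotations-per-user in the two treatments (illustrative only).
competition = [439, 120, 85, 60, 44, 30, 25, 18]
social = [262, 140, 110, 95, 80, 70, 55, 40]

stat, p_value = mannwhitneyu(competition, social, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```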
29. Taste it! Try it!
Goals of the tool:
• provide semantically-enabled reviews (a sketch of such a review follows this slide)
Features
• sufficiently easy to create for end-user acceptance
• keep users entertained – Facebook integration and badges
• offer a personalized, semantic, context-aware recommendation process
Research context: (ontology-based) collaborative filtering and user clustering, structuring and disambiguation of the reviews by using domain knowledge and incentives
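A rough sketch of what a "semantically-enabled review" could look like under the hood; the field names, rating scale, and Linked Data URIs are illustrative assumptions, not the app's actual data model:

```python
# A review is free text plus structured fields that point at ontology / Linked Data
# concepts, so it can be disambiguated, aggregated, and used for recommendations.
review = {
    "poi": "http://dbpedia.org/resource/Trento",      # reviewed point of interest
    "dish": "http://dbpedia.org/resource/Polenta",    # disambiguated against a domain vocabulary
    "rating": 4,                                      # assumed 1-5 scale
    "text": "Great polenta, friendly staff.",
}

print(review["dish"])
```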
31. Experiment
Hypotheses:
• Points vs. badges
• No information about others vs. information
• No information about oneself vs. information
(6 groups) x (~25 students) = ~150 students
• Group 0: points, piece-rate, no info on others, private info, web based
• Group 1: points, piece-rate, median, public info
• Group 2: points, piece-rate, neighborhood, public info
• Group 3: badges, piece-rate, no info on others, private info, web based
• Group 4: badges, piece-rate, median, public info – treatment
• Group 5: badges, piece-rate, neighborhood, public info – treatment
Points: max. 8 for creating reviews and 2 points for filling in the questionnaire (see the scoring sketch after this slide)
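A sketch of the point scheme as stated on the slide (up to 8 points for reviews, 2 for the questionnaire); how review counts map to points is my assumption, since the talk does not spell it out:

```python
def score(reviews_created: int, questionnaire_done: bool) -> int:
    # Up to 8 points for creating reviews (one point per review, capped, is assumed),
    # plus 2 points for filling in the questionnaire.
    review_points = min(reviews_created, 8)
    questionnaire_points = 2 if questionnaire_done else 0
    return review_points + questionnaire_points

print(score(12, True))   # 10 (8 capped + 2)
print(score(3, False))   # 3
```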
32.
           Average score   Avg. no. of reviews   Avg. no. of semantic annotations   Average time   Avg. no. of actions ×10
Group 0    7.4223          11.41                 4.41                               6.6            4.85
Group 1    7.4904          12.08                 3.76                               5.26           5.71
Group 2    10.3607         15.44                 7.26                               4.83           7.14
Group 3    7.6246          12.08                 4.98                               4.26           10.42
Group 4    7.7612          12.32                 4.48                               6.46           8.24
Group 5    8.1615          12                    5.87                               5.58           11.51
As proposed in game mechanics, showing the performance of one's neighborhood is more effective than showing the median, which is currently the "top" approach, at least in published economics papers ;-)
33. Any questions?
Thank you
Roberta Cuel
University of Trento & KIT
roberta.cuel@unitn.it
Editor's Notes
Not the whole success
We start with a laboratory experiment with university students. Running the experiment in an experimental laboratory requires us to provide incentives for students to participate. They are not friends doing us a favor to test the software, nor students in our course earning course credit – that is not what we want. We want subjects who are neutral towards us and the task, and who respond only to the incentive structure that we provide. There is a participation fee we must pay to students no matter what, simply to maintain the laboratory's reputation and to make sure students keep coming to experiments organized by other researchers as well. You can think of environments where you can run the test without paying the participation fee and offer only the flexible part (Mechanical Turk?). Don't look at the €5; concentrate on the flexible part of the payment.
(Leon Festinger, 1954); (Bram Buunk and Thomas Mussweiler, 2001; Jerry Suls, Rene Martin, and Ladd Wheeler, 2002); (Solomon E. Asch, 1956; George A. Akerlof, 1980; Stephen R. G. Jones, 1984; Douglas Bernheim, 1994).