INSEMTIVES at the KIT


Published in: Business, Economy & Finance
  • Nash and the travelling salesman
  • We start with a laboratory experiment with university students. Running an experiment in an experimental laboratory requires us to provide incentives for students to participate. They are not friends doing us a favor by testing the software, nor students in our course earning course credit; that is not what we want. We want subjects who are neutral toward us and toward the task, and who respond only to the incentive structure we provide. There is a fee we must pay students no matter what, simply to maintain the reputation of the laboratory and to make sure students keep coming to experiments organized by other researchers as well. You can think of environments where you can run the test without paying a participation fee, offering only the flexible part (Mechanical Turk?). Don’t look at the €5; concentrate on the flexible part of the payment.
  • 36 students, randomly assigned; no previous experience or knowledge of the tool required. We used the Telefónica annotation application to annotate images. Notice, however, that for the first test of our incentive system we do not really need the real tool: we could also use some other task perceived as similar in terms of effort.
  • Student’s t = 2.58, p-value = 0.0089, assuming unequal variances. F-test for the significance of the difference between the variances of the two samples: F = 2.34, p-value = 0.042.
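The two test statistics in this note can be reproduced from raw per-subject annotation counts. A minimal sketch using only Python’s standard library; the sample data below is made up for illustration (the original per-subject counts are not in the slides), and the p-values would additionally require the t and F distributions (e.g. from SciPy):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

def f_ratio(a, b):
    """F statistic: ratio of the larger sample variance to the smaller."""
    va, vb = variance(a), variance(b)
    return max(va, vb) / min(va, vb)

# Hypothetical annotation counts per subject (NOT the original data).
wta = [52, 60, 45, 70, 38, 65]
ppt = [40, 42, 38, 45, 41, 39]
print(round(welch_t(wta, ppt), 2))  # 2.78 for this made-up data
print(round(f_ratio(wta, ppt), 1))  # 24.3 for this made-up data
```

A large F ratio, as in the slides, indicates that the two treatments produce not only different means but different spreads of productivity.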
  • (Leon Festinger 1954); (Bram Buunk and Thomas Mussweiler 2001; Jerry Suls, Rene Martin, and Ladd Wheeler 2002); (Solomon E. Asch 1956, George A. Akerlof 1980, Stephen R. G. Jones 1984, Douglas Bernheim 1994).

    1. Web 2.0 and incentives for human-driven contributions
       Roberta Cuel, University of Trento and KIT
       roberta.cuel [email_address]
    4. Motivations in the Web 2.0
       • Motivation and incentives: reciprocity, reputation, competition, altruism, self-esteem, fun, money
    5. Motivation
       • Basic tenets of organizational behavior:
         - Performance = f(ability × motivation)
         - Incentives → motivation → performance
       • Performance components:
         - Task (abilities needed to get the job done)
         - Contextual (abilities to get the job done here and now → organizational citizenship behavior or prosocial organizational behaviors: altruism, politeness, etc.)
         - Ethical (ability to do the “right” thing)
         - Technology (skill vs. standardization)
    6. What is motivation?
       • Etymology: motivation ← Latin movere, “to move”
       • Psychological meaning: an internal mental state pertaining to the initiation, direction, persistence, intensity and termination of behavior
       • Managerial meaning: activity implemented to induce others to produce results
       • So we deal both with reasons for behavior and with the processes that cause those behaviors:
         - Content theories of motivation
         - Process theories of motivation
    7. Content theories of motivation
       • Need theories
         - People act to close the gap between their present condition and a desired state
         - Maslow identifies 5 kinds of need: physiological, safety, belonging, esteem, self-actualization
         - ERG identifies 3 kinds of need: Existence (physiological + security), Relatedness (relationships and belonging), Growth (esteem and self-actualization)
       • Herzberg’s “two-factor” theory
         - Hygiene factors: create dissatisfaction if absent (e.g. salary, job security, relationship with supervisor) but do not motivate per se
         - Motivators: create satisfaction when present, and thus willingness to work harder (e.g. responsibility, achievement, the work itself, growth possibilities)
       • McClelland’s achievement-power-affiliation theory
         - Achievement-oriented people prefer their own effort, direct feedback, medium-risk situations
         - Power-oriented people prefer control over others’ effort, elicit strong emotions, are concerned with reputation
         - Low vs. high affiliation needs make people more competition- vs. cooperation-oriented
       • Job characteristics approach
    8. Job characteristics approach
       • Principle: the nature of the work affects motivation and performance
       • (Diagram: core job dimensions (skill variety, task identity, task significance, autonomy, feedback) → critical psychological states (meaningfulness of work, responsibility for work outcomes, knowledge of results) → personal and work outcomes (higher internal work motivation, high-quality work performance, high satisfaction, low absenteeism))
    9. Process theories of motivation
       • Reinforcement theory
         - Positive and negative reinforcement
         - Punishment or extinction (no rewards): communication processes reduce, interpersonal tension increases, exit strategies emerge
       • Goal-setting theory: specific goals yield better results; group discussion clarifies the goal
       • Expectancy theory (effort-performance and performance-outcome expectancies)
       • Organizational justice theory: distributive justice (fair treatment with respect to the effort-reward balance) and procedural justice (fair treatment in how decisions are made about things that affect people in the workplace)
       • (Diagram: performance vs. goal difficulty; performance is low for trivial goals, high for demanding ones, and collapses into frustration when a goal becomes impossible)
    10. Porter and Lawler model
       • (Diagram: value of rewards and perceived effort-reward probability → effort → performance, moderated by abilities and the organizational context; performance yields extrinsic and intrinsic rewards which, filtered through equity perception, produce satisfaction)
    11. Types of motivations
       • Example: FLOSS software (Ghosh & Prakash, in Lerner & Tirole, 2005)
       • Motivations cross-classified by person and structure:
         - Intrinsic (predisposed in the person: drives, needs, desires) × internal (embedded in the structure, e.g. task, tools): fun, joy, gaming, interest, satisfaction, self-actualization, self-reinforcement
         - Intrinsic × external (additional to the structure, external reinforcements): social appreciation, reputation, love, trust, social capital, community support
         - Extrinsic (additional to personal predispositions, external reinforcements) × internal: usability, sociability, design-for-fun, curiosity, community-building support
         - Extrinsic × external: material/financial capital, money, rewards, prizes, medals, credit points
    12. Intrinsic / extrinsic motivations: Kaufman, Schulze, Veit (Mannheim University)
    13. Game theory
       • Game theory is a formal way to analyze interaction among a number of rational agents who behave strategically:
         - The rational agents: the players involved in the situation (each makes a best choice)
         - A number of players: more than one
         - Rationality/payoffs: the players’ preferences over the outcomes of the game
         - The interactions: one player’s behavior affects another’s
         - The rules: who moves when, what they know, what they can do
         - The outcomes: the result of the game for each combination of moves
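The idea that each player makes a best choice given the others’ behavior can be checked mechanically for small games. A sketch of a pure-strategy Nash equilibrium finder; the prisoner’s dilemma payoffs used to exercise it are a standard illustration, not from the slides:

```python
def pure_nash(payoffs):
    """Find pure-strategy Nash equilibria of a two-player game.
    payoffs[(i, j)] = (row player's payoff, column player's payoff)."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    equilibria = []
    for i in rows:
        for j in cols:
            u_r, u_c = payoffs[(i, j)]
            # (i, j) is an equilibrium if neither player gains by deviating.
            if all(payoffs[(k, j)][0] <= u_r for k in rows) and \
               all(payoffs[(i, k)][1] <= u_c for k in cols):
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: defecting ("D") is each player's best response
# to anything, so mutual defection is the unique equilibrium.
pd = {("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
      ("D", "C"): (0, -3), ("D", "D"): (-2, -2)}
print(pure_nash(pd))  # [('D', 'D')]
```

The same structure (individually rational choices producing a collectively poor outcome) is exactly the annotation problem the following slides address with mechanism design.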
    14. Mechanism design
       • Mechanism design is about translating game theory into effective behavior:
         - Design rules such that a desired set of outcomes happens
         - Align the interests of the parties and produce maximum social welfare
       • A selection of relevant variables:
         - The goal: the goal of the software and its relation to the goals and interests of its users
         - The tasks: how much effort the task requires from the user, and which competences and abilities are needed to carry it out
         - The social structure: a stylized, simplified set of social relationships among the users
         - The nature of the good: who can benefit from individual contributions
    15. The incentive context matrix
    16. Motivations: another study (the case of MTurk)
    17. Designing systems that foster users to provide good-quality content
       • Ideally: field → desk → lab → field
       • Analyze the domain and locate yourselves in the matrixes:
         - Find the relevant point of that situation (goal and tasks)
         - Focus on a small group of individuals (social structure)
         - Analyze their motivation (internal/external, intrinsic/extrinsic)
         - Analyze the other relevant variables (nature of the good being produced, skill variety/level)
       • Design the simplest possible model that can effectively support contributors
       • Test and get feedback
       • Fine-tune the experiment and add other elements
    18. A procedural ordering of methods to develop incentive-compatible applications
    19. Workshop and interview reports
       • Domain observations (second-hand data)
       • Ethnography or qualitative face-to-face interviews
       • Questionnaires
       • Observations with selected individuals
       • Quantitative analysis (data collection)
    20. Field experiment
       • Like a lab experiment, but with…
       • … real users
       • … a real context
       • … real tasks
       • … incentives that have work-related consequences
       • … evaluation conducted on those consequences
    21. A CASE STUDY
    22. Telefónica Investigación y Desarrollo (TID, Spain)
    23. Field and domain analysis
       • Domain analysis
         - Site visits; semi-structured, qualitative interviews covering:
           · Communication processes
           · Existing usage practices and problems
           · Existing tools/solutions
           · Semantic annotation solutions
         - Tape recording, transcription
         - Data analysis via ex-post categorization
       • Focus group discussions
         - Usability lab tests
         - Expert walkthroughs
    24. Workshop and interview report
       • Various work styles
         - In-house and remote workers
         - Composition of groups: in some groups people have worked together for a long time; other projects started only a few months ago
       • People share documents only inside their group
         - A shared directory is more than enough
         - No one looks for information in the portal
         - They use their networks to get information and solve problems
       • Strongly hierarchical organization
         - Personal benefits
         - Control is an issue
         - Reasons for annotating: it is mandatory (quality); altruism; fun
    25. The TID matrix & motivations
       • Social structure (various structures co-exist): strongly hierarchical organization (control is an issue); working groups and communities of experts
       • Nature of the good: public good vs. private and club goods
       • Skill variety/level: skilled (knowledge workers)
       • Motivations: fun, visibility, reputation, promotion, money
    26. Why mechanism design?
       • The case of TID:
         - Annotation is a public good
         - Free-riding problem
         - The problem of efficiently allocating time between the main task and the annotation activity
       • We need to design the “game” so that it achieves the desired annotation outcome without distracting employees too much from their main job
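The free-riding problem named above can be made concrete with a linear public goods game. A toy sketch; the parameters (4 players, endowment 10, marginal per-capita return 0.5) are illustrative assumptions, not figures from the TID case:

```python
def payoff(contributions, i, endowment=10, mpcr=0.5):
    """Linear public goods game: every unit put in the common pot pays
    `mpcr` back to each player. With mpcr < 1, keeping a unit beats
    contributing it (free riding is individually rational); with
    n * mpcr > 1, full contribution maximizes total welfare."""
    pot = sum(contributions)
    return endowment - contributions[i] + mpcr * pot

full = [10, 10, 10, 10]       # everyone contributes everything
one_rider = [0, 10, 10, 10]   # player 0 free rides

print(payoff(full, 0))        # 20.0: cooperation pays everyone well
print(payoff(one_rider, 0))   # 25.0: the free rider does even better...
print(payoff(one_rider, 1))   # 15.0: ...at the contributors' expense
```

Annotations at TID have the same shape: everyone benefits from colleagues’ tags whether or not they tag anything themselves, so the mechanism must make contributing individually worthwhile.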
    27. The prototype creation
    28. PD workshops and HCI analysis
    29. Lab experiment
       • Test two rewarding/incentive systems
       • Pay per click (PPT): €0.03 per tag added, up to a €3 maximum
       • Winner takes all (WTA): the person who adds the highest number of tags/annotations wins €20
       • (Participation fee: €5)
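The two schemes can be written down directly. The amounts come from the slide (€0.03 per tag capped at €3, a €20 prize, a €5 show-up fee); the tie-breaking rule for WTA (paying all tied winners) is an assumption of this sketch:

```python
def pay_per_tag(tags, rate=0.03, cap=3.0, fee=5.0):
    """PPT: €0.03 per tag added, capped at €3, plus the €5 show-up fee."""
    return fee + min(tags * rate, cap)

def winner_takes_all(tag_counts, prize=20.0, fee=5.0):
    """WTA: the top contributor wins the €20 prize (ties all paid, an
    assumption); everyone receives the show-up fee."""
    best = max(tag_counts)
    return [fee + (prize if t == best else 0.0) for t in tag_counts]

print(pay_per_tag(50))                 # 50 tags earn €1.50 on top of the fee
print(pay_per_tag(200))                # 8.0: the €3 cap binds
print(winner_takes_all([40, 55, 30]))  # [5.0, 25.0, 5.0]
```

Note the different marginal incentives: under PPT every tag beyond the cap is worthless, while under WTA only relative ranking matters.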
    30. The experiment (setting)
       • 36 students, randomly assigned to the two “treatments”
       • Individual task: annotation of images
       • Clear set of instructions
       • Guided training session to give a basic understanding of the annotation tool
       • 8-minute clocked session (time pressure)
       • Goal: produce the maximum number of tags on a random set of images in the allotted time
    31. The lab
    32. The experiment: screenshots
    37. First results
    38. First results
       • In the WTA treatment, 76% of subjects make more annotations than the average number of annotations in the PPT scenario.
       • However, several subjects in both scenarios make few annotations, which suggests that the WTA incentive system does not provide the same incentives to all subjects.
    39. The results
       • This suggests that the WTA scenario spurs subjects to perform at their very best.
       • At the same time, under the PPT incentive system subjects seem less interested in the final result and tend to coordinate at lower levels of productivity.
    40. Quality of data
       • The number of images annotated tends to be the same in both treatments: subjects annotate 11 images on average.
       • PPT: 437 unique tags; 134 tags repeated only twice
       • WTA: 390 unique tags; 118 tags repeated only twice
    41. The costs and scalability
       • The real scenario, with 19/17 subjects:
         - PPT scenario: €27.03
         - WTA scenario: €20
       • What if there were 40 subjects or more?
         - PPT scenario: €56 (€0.03 per annotation)
         - WTA scenario: €20 (€0.008 per annotation)
         - Note that the prize in the WTA scenario should be adjusted according to the chances of winning it.
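The scalability argument is simple arithmetic: PPT cost grows linearly with output, while the WTA prize is fixed, so its cost per annotation falls as the crowd grows. A sketch; the per-treatment averages used below (about 47 tags per PPT subject, about 62 per WTA subject) are back-of-the-envelope assumptions chosen to be roughly consistent with the slide’s figures:

```python
def ppt_cost(n_subjects, avg_tags, rate=0.03):
    """Variable cost of pay-per-tag: linear in total output (fees excluded)."""
    return n_subjects * avg_tags * rate

def wta_cost_per_tag(n_subjects, avg_tags, prize=20.0):
    """The fixed WTA prize spread over total output."""
    return prize / (n_subjects * avg_tags)

# 40 subjects, assumed averages per treatment (illustrative).
print(round(ppt_cost(40, 47), 2))          # about €56 total, €0.03 per tag
print(round(wta_cost_per_tag(40, 62), 3))  # about €0.008 per tag
```

This is why the slide notes the WTA prize should be adjusted with the odds of winning: at scale the expected reward per contributor shrinks, weakening the incentive even as the scheme gets cheaper.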
    42. Some biases
       • The students are:
         - Volunteers who are used to participating in experiments
         - Heavy web users and game players
         - Paid to show up
       • Quality of the tags
         - Tagging quality was controlled for: no obvious “mistakes” or “cheating”
    43. Prototype refinement
    44. “Incentivizing the tool”
    45. Next steps: Telefónica I+D
       • Replicate the experiment with real users
         - Change 1: the task becomes relevant in terms of practical usefulness for users (search). Effort is directed to producing a good (tags) that users do not consume directly (it is used to achieve other goals) → change the structure of the game to let users exploit tagging to achieve results (treasure hunt!)
         - Change 2: the task has social implications
           · Provide information about contributions by others
           · Provide information about the user’s relative standing compared to the community
    46. Next steps: Telefónica I+D
       • Replicate the experiment with real users
         - Change 3: mimic the social structure of the company
           · Run the experiment with teammates
           · Use real tasks
           · Try alternative pay-for-performance schemes (users produce tags not to earn money but to use the tags to perform other tasks)
       • Field → desk → lab → field
    47. Example: MovieLens
       • MovieLens is an online movie recommender system that invites users to rate movies and, in return, makes personalized recommendations and predictions for movies the user has not yet rated.
       • Personalized recommendations are based on collaborative filtering technology.
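Collaborative filtering in its simplest user-based form predicts a rating as a similarity-weighted average of other users’ ratings. A toy sketch; the users, movies, and scores are invented, and MovieLens itself uses more sophisticated algorithms:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the movies two users both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[m] * v[m] for m in common)
    nu = sqrt(sum(u[m] ** 2 for m in common))
    nv = sqrt(sum(v[m] ** 2 for m in common))
    return dot / (nu * nv)

def predict(ratings, user, movie):
    """Predict `user`'s rating of `movie` as a similarity-weighted
    average of the ratings by users who have seen it."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or movie not in theirs:
            continue
        w = cosine(ratings[user], theirs)
        num += w * theirs[movie]
        den += w
    return num / den if den else None

ratings = {
    "ann": {"Up": 5, "Alien": 2},
    "bob": {"Up": 5, "Alien": 2, "Brazil": 4},
    "cat": {"Up": 1, "Alien": 5, "Brazil": 2},
}
# ann agrees perfectly with bob and weakly with cat, so her predicted
# rating for "Brazil" lands between their scores, pulled toward bob's 4.
print(predict(ratings, "ann", "Brazil"))
```

The experiment in the next slide is about the other side of such a system: whether telling users how much others contribute changes how much they rate.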
    48. Background knowledge: social information might affect behavior
       • People lean toward social comparison in situations that are ambiguous.
       • When information about prevalent behavior is available, people tend to copy that behavior, a phenomenon referred to as conformity.
       • We compare ourselves to others who are better off for guidance, and to others who are worse off to increase our self-esteem.
       • When information about other people’s payoffs is available, people show distributional concerns, such as inequality aversion.
    49. MovieLens: method of analysis
       • Method: field experiment
         - Randomized sample of 398 users with 30 or more reviews
         - A random money prize assigned to participants
         - Divided into 3 experimental groups:
           · Personal newsletter 1: median number of contributions (of their cohort)
           · Personal newsletter 2: net benefit (benefit minus cost in time) of the average user in their own cohort
           · Control: their own ratings only
       • Stage one: pre-experimental survey to estimate the benefits of membership
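The randomization step above (398 users split into two newsletter treatments and a control) can be sketched as a seeded shuffle. The seed and group labels are illustrative, not from the study:

```python
import random

def assign_groups(user_ids,
                  labels=("newsletter_1", "newsletter_2", "control"),
                  seed=42):
    """Randomly split users into equal-as-possible experimental groups.
    A fixed seed makes the assignment reproducible for auditing."""
    ids = list(user_ids)
    random.Random(seed).shuffle(ids)
    return {lab: ids[i::len(labels)] for i, lab in enumerate(labels)}

groups = assign_groups(range(398))
print({lab: len(g) for lab, g in groups.items()})  # sizes 133/133/132
```

Random assignment is what licenses the causal reading of any difference in rating behavior across the three groups.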
    50. A procedural ordering of methods to develop incentive-compatible applications
    51. Any questions?
       • Thank you
       • Roberta Cuel, University of Trento & KIT
       • [email_address]