
Rating Scales for Collective Intelligence in Innovation Communities (ICIS)


Rating Scales for Collective Intelligence in Innovation Communities: Why Quick and Easy Decision Making Does Not Get It Right
Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar
1. Problem Setting
(image slides)
So, there are large data pools… How do you select the best ideas?
2. Theoretical Background
(image slide)
Motivation (additional slide, not part of the presentation)
- Organizations face information overload and bounded rationality (Berg-Jensen et al. 2010)
- Organizations' absorptive capacity is limited (Cohen et al. 1990; Di Gangi et al. 2009)
- Idea selection is a pivotal problem of Open Innovation (Hojer et al. 2010; Piller/Reichwald 2010)
Research Questions: Which rating mechanisms perform best for selecting innovation ideas?
Dimensions of Idea Quality
- An idea's originality and innovativeness
- Ease of transforming an idea into a new product
- An idea's value for the organization
- An idea's concretization and maturity
Source: [1, 2, 3]
3. Research Model
Research Model
- Rating Scale → Judgment Accuracy (H1+); Rating Scale → Rating Satisfaction (H2+)
- H1: The granularity of the rating scale positively influences its rating accuracy.
- H2: The granularity of the rating scale positively influences the users' satisfaction with their ratings.
Research Model (adding the moderator User Expertise, H3a)
- H3a: User expertise moderates the relationship between rating scale granularity and rating accuracy such that the positive relationship will be weakened for high levels of user expertise and strengthened for low levels of user expertise.
Research Model (adding the moderator User Expertise, H3b)
- H3b: User expertise moderates the relationship between rating scale granularity and rating satisfaction such that the positive relationship will be strengthened for high levels of user expertise and weakened for low levels of user expertise.
Research Methodology
- Pool of 24 ideas from a real-world idea competition
- Multi-method study
- Web-based experiment
- Survey measuring participants' rating satisfaction
- Independent expert rating (N = 7) of idea quality, based on the Consensual Assessment Technique [1, 2]
4. Experiment
Participant Demographics (N = 313)
Participant Demographics
Participant Demographics (additional slide, not part of the presentation)
Screenshot of the system
Research Design
- Promote/Demote Rating
- 5-Star Rating
- Complex Rating
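The three treatments produce ratings on very different raw scales (a binary promote/demote vote, a 1-5 star rating, and a multi-attribute complex rating), so any cross-condition comparison needs a common representation. The sketch below shows one possible long-format layout and normalization; the column names, value encodings, and rescaling rule are illustrative assumptions, not the study's actual data schema.

```python
# Hypothetical long-format layout for the three treatment conditions.
# Column names and value encodings are illustrative assumptions only.
import pandas as pd

ratings = pd.DataFrame([
    # participant, condition,       idea_id, raw rating
    ("p001", "promote_demote", 1, 1),      # 1 = promote, 0 = demote
    ("p002", "five_star",      1, 4),      # 1-5 stars
    ("p003", "complex",        1, 3.6),    # mean of several sub-dimension scores (1-5)
], columns=["participant", "condition", "idea_id", "raw"])

def to_unit_interval(row):
    """Rescale each condition's raw rating to [0, 1] so ideas can be
    compared across treatments (one possible normalization)."""
    if row["condition"] == "promote_demote":
        return float(row["raw"])            # already 0 or 1
    return (row["raw"] - 1.0) / 4.0         # map 1-5 onto 0-1

ratings["score"] = ratings.apply(to_unit_interval, axis=1)
print(ratings)
```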
5. Results
Correct Identification of Good and Bad Ideas
Error Identifying Top Ideas as Good and Bottom Ideas as Bad
Rating Accuracy (Fit-Score)
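The slides report rating accuracy as a fit score against the expert baseline but do not spell out its formula. The following is a minimal sketch of one plausible accuracy measure in that spirit: the share of the experts' top-k and bottom-k ideas that a participant's own ranking also places at the top and bottom. The function name, the choice of k, and the simulated data are assumptions for illustration only, not the study's exact operationalization.

```python
import numpy as np

def fit_score(user_scores, expert_scores, k=6):
    """Toy accuracy measure: fraction of the experts' top-k ideas that the
    participant also ranks in their own top k, plus the same for the bottom k,
    averaged. An illustrative stand-in for the study's fit score."""
    user_scores = np.asarray(user_scores, dtype=float)
    expert_scores = np.asarray(expert_scores, dtype=float)
    order_user = np.argsort(user_scores)        # ascending order of the participant's scores
    order_expert = np.argsort(expert_scores)    # ascending order of the expert baseline
    top_hit = len(set(order_user[-k:]) & set(order_expert[-k:])) / k
    bottom_hit = len(set(order_user[:k]) & set(order_expert[:k])) / k
    return (top_hit + bottom_hit) / 2.0

rng = np.random.default_rng(0)
expert = rng.normal(size=24)                     # expert quality scores for 24 ideas
noisy_user = expert + rng.normal(scale=1.0, size=24)
print(round(fit_score(noisy_user, expert), 2))
```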
Factor Analysis of Idea Quality (additional slide, not part of the presentation)
Participants' Rating Satisfaction
ANOVA Results (N = 313; *** significant at p < 0.001, ** at p < 0.01, * at p < 0.05)
ANOVA Results: post-hoc comparisons
The complex rating scale leads to significantly higher rating accuracy than the promote/demote rating and the 5-star rating (p < 0.001).
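The reported analysis is a one-way comparison of the three scale conditions followed by post-hoc pairwise tests. The sketch below reproduces that workflow in Python on simulated data; the group means are placeholders, and Tukey's HSD is used here as one common post-hoc procedure since the slides do not name the specific test.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Simulated per-participant accuracy scores for the three conditions
# (values are placeholders, not the reported results).
df = pd.DataFrame({
    "scale": np.repeat(["promote_demote", "five_star", "complex"], 100),
    "accuracy": np.concatenate([
        rng.normal(0.55, 0.10, 100),
        rng.normal(0.57, 0.10, 100),
        rng.normal(0.65, 0.10, 100),
    ]),
})

# One-way ANOVA across the three rating-scale conditions
groups = [g["accuracy"].values for _, g in df.groupby("scale")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Post-hoc pairwise comparisons (Tukey HSD)
print(pairwise_tukeyhsd(df["accuracy"], df["scale"]))
```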
Testing Moderating Effects: Recoding of Rating Scales (additional slide, not part of the presentation)
Moderators are variables that alter the direction or strength of the relationship between a predictor and an outcome.
- Inclusion of interaction terms in a hierarchical regression analysis
- Testing Hypotheses 3a and 3b requires recoding the rating scale into dummy variables (see the sketch after the regression results below)
Regression Results
- There is no direct and no moderating effect of user expertise.
- The scale with the highest rating accuracy and rating satisfaction should be used for all user groups.
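The slides above describe the moderation test as a hierarchical regression in which the rating scale is recoded into dummy variables and scale-by-expertise interaction terms are added in a second step. A minimal sketch of that procedure with statsmodels follows; the data, the expertise measure, and the column names are simulated assumptions, and the simulation deliberately builds in no expertise effect, mirroring the reported finding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "scale": rng.choice(["promote_demote", "five_star", "complex"], size=n),
    "expertise": rng.normal(0, 1, size=n),      # hypothetical expertise measure
})
# Simulated accuracy: a condition effect, no expertise effect
df["accuracy"] = (
    0.55
    + 0.10 * (df["scale"] == "complex")
    + rng.normal(0, 0.10, size=n)
)
df["expertise_c"] = df["expertise"] - df["expertise"].mean()    # mean-center the moderator

# Step 1: main effects only (C(...) handles the dummy coding of the scale)
m1 = smf.ols("accuracy ~ C(scale) + expertise_c", data=df).fit()
# Step 2: add scale x expertise interaction terms to test moderation (H3a)
m2 = smf.ols("accuracy ~ C(scale) * expertise_c", data=df).fit()

print(m1.rsquared, m2.rsquared)                 # change in R^2 across the hierarchy
print(m2.summary().tables[1])                   # interaction terms come out non-significant here
```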
Correlations of Expert Rating and Rating Scales (additional slide, not part of the presentation)
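One straightforward way to relate each rating scale to the expert baseline is to aggregate the community ratings per idea and correlate them with the expert scores. The sketch below uses a Spearman rank correlation on simulated numbers; the mean aggregation and the data are assumptions, not the correlations reported on the slide.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_ideas = 24
expert = rng.normal(size=n_ideas)                     # expert quality score per idea
# Community score per idea, e.g. the mean user rating within one condition
# (simulated here as a noisy version of the expert score).
community = expert + rng.normal(scale=0.8, size=n_ideas)

rho, p = stats.spearmanr(expert, community)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```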
6. Contribution
Limitations
- Expert rating as baseline
- Forced choice
(image slide)
Contribution
- Theory: Theory Building (Collective Intelligence); Theory Extension (Creativity Research)
- Practice: Design recommendation
Contributions (additional slide, not part of the presentation)
- Theory: Design and test of a model to analyze the influence of the rating scale on rating quality and user satisfaction.
- Theory Extension (Creativity Research): the developed fit/accuracy measure extends previous creativity research by correcting for rating error.
- Theory Building (Collective Intelligence): theoretical underpinnings for the emerging research stream of "collective intelligence"; no influence of expertise → wisdom of crowds / collective intelligence in innovation communities does work.
- Practice: Simple scales have low rating accuracy and low satisfaction → design recommendations for user rating scales for idea evaluation.
Rating Scales for Collective Intelligence in Innovation Communities
Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar
riedlc@in.tum.de, twitter: @criedl
Image credits:
- Title background: Author collection
- Starbucks Idea: http://mystarbucksidea.force.com/
- The Thinker: http://www.flickr.com/photos/tmartin/32010732/
- Information Overload: http://www.flickr.com/photos/verbeeldingskr8/3638834128/#/
- Scientists: http://www.flickr.com/photos/marsdd/2986989396/
- Reading girl: http://www.flickr.com/photos/12392252@N03/2482835894/
- User: http://blog.mozilla.com/metrics/files/2009/07/voice_of_user2.jpg
- Male Icon: http://icons.mysitemyway.com/wp-content/gallery/whitewashed-star-patterned-icons-symbols-shapes/131821-whitewashed-star-patterned-icon-symbols-shapes-male-symbol1-sc48.png
- Harvard University: http://gallery.hd.org/_exhibits/places-and-sights/_more1999/_more05/US-MA-Cambridge-Harvard-University-red-brick-building-sunshine-grass-lawn-students-1-AJHD.jpg
- Notebook scribbles: http://www.flickr.com/photos/cherryboppy/4812211497/
- La Cuidad: http://www.flickr.com/photos/37645476@N05/3488148351/
- Theory and Practice: http://www.flickr.com/photos/arenamontanus/2766579982
Papers:
[1] Amabile, T. M. (1996). Creativity in Context: Update to the Social Psychology of Creativity. 1st edition, Westview Press, Oxford, UK.
[2] Blohm, I., Bretschneider, U., Leimeister, J. M., and Krcmar, H. (2010). Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation. In Proceedings of the 43rd Hawaii International Conference on System Sciences, Kauai, Hawaii.
[3] Dean, D. L., Hender, J. M., Rodgers, T. L., and Santanen, E. L. (2006). Identifying quality, novel, and creative ideas: Constructs and scales for idea evaluation. Journal of the Association for Information Systems, 7(10), 646-698.

Editor's Notes

  • Following the open innovation paradigm and using Web 2.0 technologies, large-scale collaboration has been enabled, leading to the launch of online innovation communities.
  • Before we dive into the development of our research model, let me give you some theory background
  • More info on experimental design
  • Thank you!
