
Evaluation of eLearning


Moving beyond Level 1 and Level 2 evaluations: using usability methods to improve eLearning.


  1. Evaluation of eLearning
     Michael M. Grant, PhD
     Michael M. Grant 2010
  2. (image-only slide)
  3. Kirkpatrick's Levels
     Level 5 (ROI) - the investment of the training compared to its relative benefits to the organization and/or productivity/revenue
     Share of organizations reporting use of each level (ASTD, 2005): Level 1: 91.3%; Level 2: 53.9%; Level 3: 22.9%; Level 4: 7.6%; Level 5 (ROI): 2.1%
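To make Level 5 concrete, Phillips operationalizes ROI as net program benefits over program costs:

ROI (%) = (program benefits - program costs) / program costs × 100

For example, a course that costs $10,000 to build and deliver and returns $25,000 in benefits yields (25,000 - 10,000) / 10,000 × 100 = 150%. (These example figures are illustrative, not from the slides.)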
  4. Kirkpatrick (& Phillips) Model
     17.9%, 92% (ASTD, 2009)
  5. Formative Evaluation
     What's the purpose?
  6. A focus on improvement during development.
  7. Level 2 Evaluations
     Appeal
     Effectiveness
     Efficiency
  8. Data Collection Matrix
  9. “Vote early and often.”
     The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided. (Reeves & Hedberg, 2003, p. 142)
  10. (image-only slide)
  11. “Experts are anyone with specialized knowledge that is relevant to the design of your ILE.”
      (Reeves & Hedberg, 2003, p. 145)
  12. Expert Review
  13. Interface Review Guidelines
      from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html
  14. User Review
      Observations from one-on-ones and small groups
  15. What Is Usability?
  16. “The most common user action on a Web site is to flee.”
      — Edward Tufte
  17. “at least 90% of all commercial Web sites are overly difficult to use… the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.”
      — Jakob Nielsen
  18. Nielsen's Web Usability Rules
      Visibility of system status
      Match between system and real world
      User control and freedom
      Consistency and standards
      Error prevention
      Recognition rather than recall
      Flexibility and efficiency of use
      Help users recognize, diagnose, and recover from errors
      Help and documentation
      Aesthetic and minimalist design
  19. Ease of learning - How fast can a user who has never seen the user interface before learn it sufficiently well to accomplish basic tasks?
      Efficiency of use - Once an experienced user has learned to use the system, how fast can he or she accomplish tasks?
      Memorability - If a user has used the system before, can he or she remember enough to use it effectively the next time, or does the user have to start over again learning everything?
      Error frequency and severity - How often do users make errors while using the system, how serious are these errors, and how do users recover from them?
      Subjective satisfaction - How much does the user like using the system?
  20. Two Major Methods to Evaluate Usability
  21. Heuristic Evaluation Process
      Several experts individually compare a product to a set of usability heuristics
      Violations of the heuristics are rated for severity and extent, and solutions are suggested
      At a group meeting, violation reports are categorized and assigned: average severity ratings, extents, heuristics violated, and a description of the opportunity for improvement
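To make the group-meeting step concrete, here is a minimal sketch of pooling matched violation reports and averaging their severity ratings. The individual reports, the locations, and the 0-4 severity scale are illustrative assumptions, not taken from the slides.

```python
from collections import defaultdict
from statistics import mean

# One report per evaluator: (heuristic violated, severity 0-4, location).
# These example reports are hypothetical.
reports = [
    ("Consistency and standards", 3, "course menu"),
    ("Consistency and standards", 2, "course menu"),
    ("Error prevention", 4, "quiz submission"),
]

# Group matching violations, then average severity across evaluators,
# mirroring the categorize-and-assign step done at the group meeting.
grouped = defaultdict(list)
for heuristic, severity, location in reports:
    grouped[(heuristic, location)].append(severity)

for (heuristic, location), severities in sorted(grouped.items()):
    print(f"{heuristic} @ {location}: avg severity {mean(severities):.1f} "
          f"({len(severities)} evaluator(s))")
```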
  22. Heuristic Evaluation Comparisons
      Advantages:
      Quick: no need to find or schedule users
      Easy to review problem areas many times
      Inexpensive: no fancy equipment
      Disadvantages:
      Validity: no users involved
      Finds fewer problems (perhaps 40-60% fewer)
      Getting good experts
      Building consensus among experts
  23. Heuristic Evaluation Report
  24. Heuristic Evaluation Report
  25. User Testing
  26. User Testing
      People whose characteristics (or profiles) match those of the Web site's target audience perform a sequence of typical tasks using the site.
      Examines:
      Ease of learning
      Speed of task performance
      Error rates
      User satisfaction
      User retention over time
  27. Image from (nz)dave at http://www.flickr.com/photos/nzdave/491411546/
  28. Elements of User Testing
      Define target users
      Have users perform representative tasks
      Observe users
      Report results
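As a rough illustration of how the "observe users" and "report results" steps can be quantified into the measures listed on slide 26, here is a small sketch; the participants, the task, and all numbers are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskAttempt:
    """One observed user performing one representative task."""
    user: str
    task: str
    seconds: float   # time on task
    errors: int      # errors observed
    completed: bool  # finished without help?

# Hypothetical observations from three one-on-one sessions.
attempts = [
    TaskAttempt("P1", "enroll in a course", 95.0, 1, True),
    TaskAttempt("P2", "enroll in a course", 210.0, 4, False),
    TaskAttempt("P3", "enroll in a course", 120.0, 0, True),
]

completed = [a for a in attempts if a.completed]
print(f"completion rate: {len(completed) / len(attempts):.0%}")
print(f"mean time on task: {mean(a.seconds for a in attempts):.0f} s")
print(f"mean errors per attempt: {mean(a.errors for a in attempts):.1f}")
```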
  29. Why Multiple Evaluators?
      A single evaluator achieves poor results
      Only finds about 35% of usability problems
      5 evaluators find more than 75%
  30. Why Only 5 Users? (Nielsen, 2000)
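The curve behind this slide is Nielsen and Landauer's model: the proportion of usability problems found by n test users is 1 − (1 − L)^n, where L is the chance that a single user exposes a given problem; Nielsen (2000) reports L ≈ 31% as a typical value. A quick computation shows why five users are usually enough:

```python
# Nielsen & Landauer's model: share of problems found by n users
# is 1 - (1 - L)**n, with L ≈ 0.31 (Nielsen, 2000).
L = 0.31

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} user(s): {found:.0%} of problems found")
```

Under this model one user finds about 31% of problems and five find about 84%, consistent with the figures on the previous slide; users beyond the fifth mostly re-discover problems already seen.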
  31. Reporting User Testing
      Overall goals/objectives
      Methodology
      Target profile
      Testing outline with test script
      Specific task list to perform
      Data analysis & results
      Recommendations
  32. Recent Methods for User Testing
  33.-40. (image-only slides)
  41. 10-Second Usability Test
      Disable stylesheets
      Check for the following:
      Semantic markup
      Logical organization
      Only images related to content appear
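Part of this check can be automated. The sketch below uses only the Python standard library to tally semantic versus generic tags and to flag images missing alt text; the tag list and the page.html filename are illustrative assumptions, not part of the original test.

```python
from html.parser import HTMLParser

SEMANTIC = {"h1", "h2", "h3", "nav", "main", "header", "footer",
            "article", "section", "ul", "ol", "table"}

class MarkupAudit(HTMLParser):
    """Tally semantic vs. generic tags and images missing alt text."""
    def __init__(self):
        super().__init__()
        self.semantic = self.generic = self.imgs_without_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.semantic += 1
        elif tag in ("div", "span"):
            self.generic += 1
        if tag == "img" and not dict(attrs).get("alt"):
            self.imgs_without_alt += 1

audit = MarkupAudit()
with open("page.html", encoding="utf-8") as f:  # placeholder filename
    audit.feed(f.read())
print(f"semantic tags: {audit.semantic}, generic div/span: {audit.generic}")
print(f"images missing alt text: {audit.imgs_without_alt}")
```

A page that survives this audit (mostly semantic tags, alt text everywhere) will usually still read sensibly with stylesheets disabled, which is the point of the 10-second test.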
  42. Alpha, Beta & Field Testing
      Akin to prototyping
  43. References & Acknowledgements
      American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.
      Follett, A. (2009, October 9). 10 qualitative tools to improve your web site. Instant Shift. Retrieved March 18, 2010 from http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/
      Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen's Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html
      Reeves, T. C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Available at http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt
      Reeves, T. C., & Hedberg, J. G. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.
  44. Michael M. Grant 2010
