Notes on Evaluation of eLearning


Notes on the Evaluation of eLearning
Michael M. Grant, PhD

Kirkpatrick (& Phillips) Model
  • Level 1: Reaction — participants’ reactions to the instruction
  • Level 2: Learning — the degree to which learning occurs as a result of the instruction
  • Level 3: Transfer — the transfer of learning into changed job behavior
  • Level 4: Organizational performance — the impact the learning has on the organization
  • Level 5: ROI — the cost of the training compared to its benefits to the organization and/or its productivity and revenue

Formative Evaluation
  • Overview
  • What’s the purpose? A focus on improvement during development.
  • Level 2 evaluations
  • Appeal (Level 1: Reaction)
  • Effectiveness
  • Efficiency

Data Collection Matrix
Questions a formative evaluation should answer:
  1. What are the logistical requirements?
  2. What are user reactions?
  3. What are trainer reactions?
  4. What are expert reactions?
  5. What corrections must be made?
  6. What enhancements can be made?
Methods: anecdotal records, user questionnaires, user interviews, user focus groups, usability observations, online data collection, and expert reviews. Each method addresses several of the questions above.
  • “Vote early and often.”
  • The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided (Reeves & Hedberg, 2003, p. 142).

Formative Evaluation Methods (Reeves & Hedberg, 2003)
  • Expert review
  • User review
  • Usability testing
  • Alpha and beta prototypes and field trials
Expert Review
  • “Experts are anyone with specialized knowledge that is relevant to the design of your ILE” (Reeves & Hedberg, 2003, p. 145).
  • SMEs, instructional experts, graphic designers, teachers, and trainers
  • Interface Review Guidelines from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html

User Review
  • Observations from one-on-one and small-group sessions
  • We need additional information about how the elearning product is being used and how it will perform.

What Is Usability?
  • “The most common user action on a Web site is to flee.” —Edward Tufte
  • “at least 90% of all commercial Web sites are overly difficult to use…. the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.” —Jakob Nielsen

Nielsen’s Web Usability Rules
  • Visibility of system status
  • Match between system and the real world
  • User control and freedom
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Help users recognize, diagnose, and recover from errors
  • Help and documentation
  • Aesthetic and minimalist design
Usability.gov
  • Ease of learning - How fast can a user who has never seen the interface learn it well enough to accomplish basic tasks?
  • Efficiency of use - Once an experienced user has learned the system, how fast can he or she accomplish tasks?
  • Memorability - If a user has used the system before, can he or she remember enough to use it effectively next time, or must the user relearn everything?
  • Error frequency and severity - How often do users make errors, how serious are those errors, and how do users recover from them?
  • Subjective satisfaction - How much does the user like using the system?
Two Major Methods to Evaluate Usability

Heuristic Evaluation Process
  • Several experts individually compare a product to a set of usability heuristics.
  • Violations of the heuristics are rated for severity and extent, and solutions are suggested.
  • At a group meeting, violation reports are categorized and assigned average severity ratings, extents, heuristics violated, and a description of the opportunity for improvement.
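The aggregation step above can be sketched in a few lines of code. This is a minimal sketch with invented violation reports; the 0-4 severity scale, the violation IDs, and the field layout are illustrative assumptions, not from the slides.

```python
# Hypothetical heuristic-evaluation reports: each evaluator independently
# rates each violation; the group-meeting step averages severities per
# violation. Severity scale (0-4) and data values are invented.
from collections import defaultdict

# (violation id, heuristic violated, severity 0-4), one tuple per evaluator report
reports = [
    ("V1", "Visibility of system status", 3),
    ("V1", "Visibility of system status", 4),
    ("V2", "Error prevention", 2),
    ("V2", "Error prevention", 1),
    ("V2", "Error prevention", 3),
]

# Group individual ratings by violation.
by_violation = defaultdict(list)
for vid, heuristic, severity in reports:
    by_violation[(vid, heuristic)].append(severity)

# Report the average severity and how many evaluators flagged each violation.
for (vid, heuristic), severities in sorted(by_violation.items()):
    avg = sum(severities) / len(severities)
    print(f"{vid} ({heuristic}): average severity {avg:.1f} "
          f"from {len(severities)} evaluator(s)")
```

A real report would also carry the suggested solution and extent for each violation; the grouping-then-averaging shape stays the same.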
Advantages of Heuristic Evaluation
  • Quick: no need to find or schedule users
  • Easy to review problem areas many times
  • Inexpensive: no fancy equipment

Disadvantages of Heuristic Evaluation
  • Validity: no users involved
  • Finds fewer problems (perhaps 40-60% fewer than user testing)
  • Finding good experts is hard
  • Building consensus among experts is hard
User Testing
  • People whose characteristics (or profiles) match those of the Web site’s target audience perform a sequence of typical tasks using the site.
  • Examines:
      • Ease of learning
      • Speed of task performance
      • Error rates
      • User satisfaction
      • User retention over time

Elements of User Testing
  • Define target users
  • Have users perform representative tasks
  • Observe users
  • Report results
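The observe-and-report steps usually reduce to a few simple metrics: task success rate, time on task, and errors per session. A minimal sketch with invented session records (the task, timings, and error counts are hypothetical):

```python
# Hypothetical user-testing observations: one record per (user, task).
# All values are invented for illustration.
sessions = [
    {"user": "P1", "task": "find course", "completed": True,  "seconds": 42,  "errors": 0},
    {"user": "P2", "task": "find course", "completed": True,  "seconds": 71,  "errors": 2},
    {"user": "P3", "task": "find course", "completed": False, "seconds": 120, "errors": 4},
]

completed = [s for s in sessions if s["completed"]]
success_rate = len(completed) / len(sessions)            # fraction who finished the task
mean_time = sum(s["seconds"] for s in completed) / len(completed)  # over successes only
error_rate = sum(s["errors"] for s in sessions) / len(sessions)    # errors per session

print(f"success rate: {success_rate:.0%}")
print(f"mean time on task (successes): {mean_time:.1f} s")
print(f"errors per session: {error_rate:.1f}")
```

Numbers like these feed directly into the "data analysis & results" section of the report outlined below.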
Why Multiple Evaluators?
  • A single evaluator achieves poor results, finding only about 35% of usability problems.
  • Five evaluators find more than 75%.
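These percentages follow from the curve in the Nielsen (2000) Alertbox article cited in the references: the proportion of problems found by n evaluators is 1 - (1 - L)^n, where L is the chance a single evaluator catches a given problem. A sketch using Nielsen's reported average of L = 0.31 (your project's L will differ):

```python
# Proportion of usability problems found by n evaluators, per the model in
# Nielsen (2000): found(n) = 1 - (1 - L)**n. L = 0.31 is the average
# Nielsen reports across projects; it is an assumption here, not a constant.
def proportion_found(n, L=0.31):
    """Expected fraction of all usability problems found by n evaluators."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} evaluator(s) -> {proportion_found(n):.0%} of problems found")
```

The curve flattens quickly, which is why the article argues that small groups of about five testers, run iteratively, beat one large test.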
Reporting User Testing
  • Overall goals/objectives
  • Methodology
  • Target profile
  • Testing outline with test script
  • Specific task list to perform
  • Data analysis & results
  • Recommendations
Recent Methods for User Testing
  • CrazyEgg
  • Userfly
  • fivesecondtest

10 Second Usability Test
  • Disable stylesheets.
  • Check for the following: semantic markup, logical organization, and that only images related to the content appear.

Alpha, Beta & Field Testing
  • Akin to prototyping

References & Acknowledgements
American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.
Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen’s Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html
Reeves, T. C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Available at http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt
Reeves, T. C., & Hedberg, J. G. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.
