Evaluation of eLearning


Moving beyond Level 1 and Level 2 evaluations: using usability methods to improve eLearning.

Published in: Education, Technology
Slide notes
  • Kirkpatrick and Phillips model
    Level 1: Reaction — participants’ reactions to the instruction
    Level 2: Learning — the degree to which learning occurs as a result of the instruction
    Level 3: Transfer — the transfer of learning to on-the-job behavior
    Level 4: Organizational performance — the impact the learning has on the organization
    Level 5: ROI — the investment in the training compared to its relative benefits to the organization and/or productivity/revenue
  • Effectiveness, efficiency and appeal. Carry out expert review, user observations and usability testing.
    Purpose? Provide information to guide decisions about creating, debugging and enhancing the ILE at various stages of development.
  • Decisions & Questions
    Should the interface be redesigned?
      Is the navigation clear to users?
      Are the meanings of icons clear?
      Do users get lost in navigating through the program?
    Should the number and length of video segments be decreased?
      Do users select to view video segments?
      Do users use video replay options?
      Do users rate video highly?
    Should more practice opportunities be added?
      Do users pass module quizzes?
      Do users achieve mastery on unit tests?
      Do users rate practice highly?
    Should the program scope be expanded?
      Are the materials coordinated with curricular guidelines?
      Do content experts rate the program as comprehensive?
  • Image from National Photo Company Collection (Library of Congress) at http://hdl.loc.gov/loc.pnp/cph.3b23344
  • The concern with designing software applications that people find easy to use and personally empowering.
    Usable computer programs are logical, intuitive and clear to the people who use them.
    (Reeves, 2004)
  • From U.S. Government Web site managed by the U.S. Department of Health & Human Services at http://www.usability.gov/basics/index.html
  • Delicious usability lab
  • Why not more evaluators?
  • Userfly lets you record your site’s users’ actions and then play them back in your own browser. You are able to see exactly what people are doing, including mouse clicks, keystrokes (except for those in password fields, for obvious security reasons), page scrolling and navigation across multiple pages. Everything happens as if you’re browsing the site yourself, except the actions you see follow another user’s recorded session.
    (Dmitry Fadeyev, March 9, 2009, at http://www.usabilitypost.com/2009/03/09/userfly-review/)
  • Google Website Optimizer (GWO) is a free website testing and optimization tool that lets you test and optimize site content and design. Upload a few variations of a web page and GWO will serve them in an alternating pattern to your visitors. Behind the scenes, GWO’s reporting tools are monitoring which combinations lead to the highest conversion rates. At the end of the test, you’ll have a definitive result, allowing you to continually optimize your designs and improve site performance.
    While GWO may not be a purely qualitative tool, it should be one of the most important in your toolbelt. Using some of the tools listed above, you can come to better conclusions about which designs and layouts should work best, but GWO provides a real-life litmus test. GWO takes the guesswork out of design and gives you a virtual laboratory to test your assumptions.
    (Andrew Follett, Instant Shift, October 8, 2009, at http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/)
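The serve-variations-and-compare-conversion-rates logic that GWO automates can be sketched in a few lines of Python. The event log, variation names, and conversion numbers below are invented purely for illustration:

```python
from collections import Counter

# Toy interaction log: (variation shown, converted?). All data invented.
log = [
    ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", False),
]

shown, converted = Counter(), Counter()
for variation, did_convert in log:
    shown[variation] += 1
    if did_convert:
        converted[variation] += 1

# Conversion rate per variation; the "winner" is the highest rate.
rates = {v: converted[v] / shown[v] for v in shown}
best = max(rates, key=rates.get)  # here: variation "B"
```

In a real test the log would come from live traffic, and you would also want a significance check before declaring a winner, which GWO's reporting handles for you.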
  • Conducting a five second test allows you to find out which parts of your designs are the most prominent. You can do this in two ways:
    Memory Test: You give users five seconds to look at your design and then ask them to remember specific elements.
    Click Test: You give users five seconds to locate and click on specific elements of your design.
    All you need to do is upload a design you want to test and choose the type of test you want to run. You will be given a unique link that you can share with friends and colleagues and have them do your test. Of course, you can always just let our random users do your test for really fast feedback!
    (Angry Monkeys Pty. Ltd., retrieved March 18, 2010, from http://www.fivesecondtest.com/about)
  • Jason Garrison, November 21, 2009, from http://www.fivefingercoding.com/xhtml-and-css/website-usability-testing
    Semantic markup: The content should be understandable even without style information. Tags should be chosen based on the meaning of the content, not how they render in the browser. For instance, site navigation is really a list of links, so an appropriate way to mark them up is as an unordered list.
    Logical organization: The structure of the page should make sense apart from the styled layout and be arranged in an understandable hierarchy. Ideally you should use source-ordered content. The main heading would appear first, followed by the most important content, which is how you’d expect a page to be laid out.
    The only images are those with text or some relation to the content: If somebody read a web page to you, they probably wouldn’t talk about the tiled background or the email icon next to the contact link. While useful, these elements are decorative rather than structural. Once styles are disabled, the only images that should be left are those with text (preferably with alt attributes so they’re accessible) and those that are integral to the content.
  • Evaluation of eLearning

    1. Evaluation of eLearning
       Michael M. Grant, PhD
       Michael M. Grant 2010
    2.
    3. Kirkpatrick’s Levels
       Level 5: ROI (the investment of the training compared to its relative benefits to the organization and/or productivity/revenue)
       2.1%, 7.6%, 22.9%, 53.9%, 91.3% (ASTD, 2005)
    4. Kirkpatrick (& Phillips) Model
       17.9%, 92% (ASTD, 2009)
    5. Formative Evaluation
       What’s the purpose?
    6. A focus on improvement during development.
    7. Level 2 Evaluations
       Appeal, Effectiveness, Efficiency
    8. Data Collection Matrix
    9. “Vote early and often.”
       The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided. (Reeves & Hedberg, 2003, p. 142)
    10.
    11. “Experts are anyone with specialized knowledge that is relevant to the design of your ILE.”
        (Reeves & Hedberg, 2003, p. 145)
    12. Expert Review
    13. Interface Review Guidelines
        from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html
    14. User Review
        Observations from one-on-ones and small groups
    15. What Is Usability?
    16. “The most common user action on a Web site is to flee.”
        — Edward Tufte
    17. “at least 90% of all commercial Web sites are overly difficult to use…. the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.”
        — Jakob Nielsen
    18. Nielsen’s Web Usability Rules
        Visibility of system status
        Match between system and real world
        User control and freedom
        Consistency and standards
        Error prevention
        Recognition rather than recall
        Flexibility and efficiency of use
        Help users recognize, diagnose, and recover from errors
        Help and documentation
        Aesthetic and minimalist design
    19. Ease of learning: How fast can a user who has never seen the user interface before learn it sufficiently well to accomplish basic tasks?
        Efficiency of use: Once an experienced user has learned to use the system, how fast can he or she accomplish tasks?
        Memorability: If a user has used the system before, can he or she remember enough to use it effectively the next time, or does the user have to start over again learning everything?
        Error frequency and severity: How often do users make errors while using the system, how serious are these errors, and how do users recover from them?
        Subjective satisfaction: How much does the user like using the system?
    20. Two Major Methods to Evaluate Usability
    21. Heuristic Evaluation Process
        Several experts individually compare a product to a set of usability heuristics
        Violations of the heuristics are evaluated for their severity and extent, and solutions are suggested
        At a group meeting, violation reports are categorized and assigned average severity ratings, extents, heuristics violated, and descriptions of opportunities for improvement
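The grouping-and-averaging step above can be sketched in Python. The report fields and the 0-4 severity scale are illustrative assumptions, not something specified in the deck:

```python
from collections import defaultdict
from statistics import mean

# One violation report per expert finding. Field names and the
# 0-4 severity scale are invented for illustration.
reports = [
    {"heuristic": "Consistency and standards", "severity": 3},
    {"heuristic": "Consistency and standards", "severity": 4},
    {"heuristic": "Error prevention", "severity": 2},
]

def aggregate(reports):
    """Group violation reports by heuristic and average the severities."""
    by_heuristic = defaultdict(list)
    for report in reports:
        by_heuristic[report["heuristic"]].append(report["severity"])
    return {h: mean(severities) for h, severities in by_heuristic.items()}

# aggregate(reports)["Consistency and standards"] == 3.5
```

In the group meeting described on the slide, a table like this (plus extents and suggested fixes) is what gets discussed and prioritized.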
    22. Heuristic Evaluation Comparisons
        Advantages
        Quick: do not need to find or schedule users
        Easy to review problem areas many times
        Inexpensive: no fancy equipment
        Disadvantages
        Validity: no users involved
        Finds fewer problems (40-60% less?)
        Getting good experts
        Building consensus with experts
    23. Heuristic Evaluation Report
    24. Heuristic Evaluation Report
    25. User Testing
    26. User Testing
        People whose characteristics (or profiles) match those of the Web site’s target audience perform a sequence of typical tasks using the site.
        Examines:
        Ease of learning
        Speed of task performance
        Error rates
        User satisfaction
        User retention over time
    27. Image from (nz)dave at http://www.flickr.com/photos/nzdave/491411546/
    28. Elements of User Testing
        Define target users
        Have users perform representative tasks
        Observe users
        Report results
    29. Why Multiple Evaluators?
        A single evaluator achieves poor results, finding only about 35% of usability problems
        5 evaluators find more than 75%
    30. Why Only 5 Users?
        (Nielsen, 2000)
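The figures on slides 29-30 follow from Nielsen's model, found(n) = 1 − (1 − L)^n, where L is the share of problems a single user uncovers (about 0.31 in Nielsen's 2000 data; the exact value is his estimate, not from this deck). A quick sketch of the curve:

```python
# Proportion of usability problems found by n test users, per Nielsen's
# model: found(n) = 1 - (1 - L)**n, with L ~ 0.31 (Nielsen, 2000).
def proportion_found(n, single_user_rate=0.31):
    """Share of total problems uncovered by n independent test users."""
    return 1 - (1 - single_user_rate) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%}")
# 1 user finds ~31%; 5 users already find ~84%, which is why
# Nielsen recommends several small tests of about 5 users each.
```

The curve flattens quickly, so the budget is better spent on multiple small rounds of testing than on one large one.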
    31. Reporting User Testing
        Overall goals/objectives
        Methodology
        Target profile
        Testing outline with test script
        Specific task list to perform
        Data analysis & results
        Recommendations
    32. Recent Methods for User Testing
    33.
    34.
    35.
    36.
    37.
    38.
    39.
    40.
    41. 10 Second Usability Test
        Disable stylesheets
        Check for the following:
        Semantic markup
        Logical organization
        Only images related to content appear
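Part of the images check above can be automated: images integral to the content should carry alt text, so flagging <img> tags without an alt attribute is a useful first pass. A minimal sketch using Python's standard-library HTML parser; the sample markup is invented for illustration:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect src values of <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "(no src)"))

# Invented sample markup for illustration.
checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Quiz scores">')
print(checker.missing_alt)  # ['logo.png']
```

The manual steps (disabling stylesheets and judging logical organization) still need a human eye; this only catches the mechanical accessibility gap.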
    42. Alpha, Beta & Field Testing
        Akin to prototyping
    43. References & Acknowledgements
        American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.
        Follett, A. (2009, October 8). 10 qualitative tools to improve your web site. Instant Shift. Retrieved March 18, 2010 from http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/
        Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen’s Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html
        Reeves, T.C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Available at http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt
        Reeves, T.C., & Hedberg, J.C. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.
    44. Michael M. Grant 2010