Evaluation of eLearning

Moving beyond level 1 and level 2 evaluations. Using usability methods to improve elearning.

  • Kirkpatrick (& Phillips) model
    Level 1: Reaction — participants’ reactions to the instruction
    Level 2: Learning — the degree to which learning occurs as a result of the instruction
    Level 3: Transfer — the transfer of learning to on-the-job behavior
    Level 4: Organizational performance — the impact the learning has on the organization
    Level 5: ROI — the investment in the training compared to its relative benefits to the organization and/or productivity/revenue
  • Effectiveness, efficiency and appeal. Carry out expert reviews, user observations and usability testing.
    Purpose? To provide information that guides decisions about creating, debugging and enhancing the ILE at various stages of development.
  • Decisions & Questions
    Should the interface be redesigned? Is the navigation clear to users? Are the meanings of icons clear? Do users get lost navigating through the program?
    Should the number and length of video segments be decreased? Do users select to view video segments? Do users use video replay options? Do users rate the video highly?
    Should more practice opportunities be added? Do users pass module quizzes? Do users achieve mastery on unit tests? Do users rate the practice highly?
    Should the program scope be expanded? Are the materials coordinated with curricular guidelines? Do content experts rate the program as comprehensive?
  • Image from the National Photo Company Collection (Library of Congress) at http://hdl.loc.gov/loc.pnp/cph.3b23344
  • The concern with designing software applications that people find easy to use and personally empowering. Usable computer programs are logical, intuitive and clear to the people who use them. (Reeves, 2004)
  • From the U.S. Government Web site managed by the U.S. Department of Health & Human Services at http://www.usability.gov/basics/index.html
  • Delicious usability lab
  • Why not more evaluators?
  • Userfly lets you record your site’s users’ actions and then play them back in your own browser. You are able to see exactly what people are doing, including mouse clicks, keystrokes (except those in password fields, for obvious security reasons), page scrolling and navigation across multiple pages. Everything happens as if you were browsing the site yourself, except the actions you see follow another user’s recorded session. (Dmitry Fadeyev, March 9, 2009, at http://www.usabilitypost.com/2009/03/09/userfly-review/)
  • Google Website Optimizer (GWO) is a free website testing and optimization tool that lets you test and optimize site content and design. Upload a few variations of a web page and GWO will serve them in an alternating pattern to your visitors. Behind the scenes, GWO’s reporting tools monitor which combinations lead to the highest conversion rates. At the end of the test, you’ll have a definitive result, allowing you to continually optimize your designs and improve site performance. While GWO may not be a purely qualitative tool, it should be one of the most important in your toolbelt. Using some of the tools listed above, you can come to better conclusions about which designs and layouts should work best, but GWO provides a real-life litmus test. GWO takes the guesswork out of design and gives you a virtual laboratory to test your assumptions. (Andrew Follett, Instant Shift, October 8, 2009, at http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/)
  • Conducting a five-second test allows you to find out which parts of your design are the most prominent. You can do this in two ways:
    Memory Test: You give users five seconds to look at your design and then ask them to remember specific elements.
    Click Test: You give users five seconds to locate and click on specific elements of your design.
    All you need to do is upload a design you want to test and choose the type of test you want to run. You will be given a unique link that you can share with friends and colleagues and have them do your test. Of course, you can always just let our random users do your test for really fast feedback! (Angry Monkeys Pty. Ltd., retrieved March 18, 2010, from http://www.fivesecondtest.com/about)
  • Jason Garrison, November 21, 2009, from http://www.fivefingercoding.com/xhtml-and-css/website-usability-testing
    Semantic markup: The content should be understandable even without style information. Tags should be chosen based on the meaning of the content, not how they render in the browser. For instance, site navigation is really a list of links, so an appropriate way to mark it up is as an unordered list.
    Logical organization: The structure of the page should make sense apart from the styled layout and be arranged in an understandable hierarchy. Ideally you should use source-ordered content: the main heading would appear first, followed by the most important content, which is how you’d expect a page to be laid out.
    The only images are those with text or some relation to the content: If somebody read a web page to you, they probably wouldn’t talk about the tiled background or the email icon next to the contact link. While useful, these elements are decorative rather than structural. Once styles are disabled, the only images that should be left are those with text (preferably with alt attributes so they’re accessible) and those that are integral to the content.


  • 1. Evaluation of eLearning
    Michael M. Grant, PhD
    Michael M. Grant 2010
  • 2.
  • 3. Kirkpatrick’s Levels
    Level 5: ROI
    the investment in the training compared to its relative benefits to the organization and/or productivity/revenue
    (ASTD, 2005)
  • 4. Kirkpatrick (& Phillips) Model
    (ASTD, 2009)
  • 5. Formative Evaluation
    What’s the purpose?
  • 6. A focus on improvement during development.
  • 7. Level 2 Evaluations
  • 8. Data Collection Matrix
  • 9. “Vote early and often.”
    The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided. (Reeves & Hedberg, 2003, p. 142)
  • 10.
  • 11. “Experts are anyone with specialized knowledge that is relevant to the design of your ILE.”
    (Reeves & Hedberg, 2003, p. 145)
  • 12. Expert Review
  • 13. Interface Review Guidelines
    from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html
  • 14. User Review
    Observations from one-on-ones and small groups
  • 15. What Is Usability?
“The most common user action on a Web site is to flee.”
    — Edward Tufte
  • 17. “at least 90% of all commercial Web sites are overly difficult to use….the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.”
    — Jakob Nielsen
  • 18. Nielsen’s Web Usability Rules
    Visibility of system status
    Match between system and real world
    User control and freedom
    Consistency and standards
    Error prevention
    Recognition rather than recall
    Flexibility and efficiency of use
    Help users recognize, diagnose, and recover from errors
    Help and documentation
    Aesthetic and minimalist design
  • 19. Ease of learning - How fast can a user who has never seen the user interface before learn it sufficiently well to accomplish basic tasks?
    Efficiency of use - Once an experienced user has learned to use the system, how fast can he or she accomplish tasks?
    Memorability - If a user has used the system before, can he or she remember enough to use it effectively the next time or does the user have to start over again learning everything?
    Error frequency and severity - How often do users make errors while using the system, how serious are these errors, and how do users recover from these errors?
    Subjective satisfaction - How much does the user like using the system?
  • 20. Two Major Methods to Evaluate Usability
  • 21. Heuristic Evaluation Process
    Several experts individually compare a product to a set of usability heuristics
    Violations of the heuristics are evaluated for their severity and extent, and solutions are suggested
    At a group meeting, violation reports are categorized and assigned
    average severity ratings, extents, heuristics violated, description of opportunity for improvement
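The merge step above (combining individual violation reports into average severity ratings per heuristic) can be sketched in code. This is a minimal, hypothetical example: the report structure and severity values are invented for illustration, with severities on Nielsen's 0-4 scale.

```python
from statistics import mean

# Hypothetical violation reports from three evaluators; each maps a
# violated heuristic to a severity rating on Nielsen's 0-4 scale.
reports = [
    {"Visibility of system status": 3, "Consistency and standards": 2},
    {"Visibility of system status": 4, "Error prevention": 1},
    {"Visibility of system status": 3, "Consistency and standards": 3},
]

def summarize(reports):
    """Merge individual reports into (heuristic, avg severity, # evaluators)."""
    merged = {}
    for report in reports:
        for heuristic, severity in report.items():
            merged.setdefault(heuristic, []).append(severity)
    # Sort the most severe problems to the top of the group report.
    return sorted(
        ((h, mean(s), len(s)) for h, s in merged.items()),
        key=lambda row: row[1],
        reverse=True,
    )

for heuristic, avg, evaluators in summarize(reports):
    print(f"{heuristic}: avg severity {avg:.1f} ({evaluators} evaluator(s))")
```

A problem flagged by several evaluators accumulates more ratings, so the report naturally surfaces both severity and agreement.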
  • 22. Heuristic Evaluation Comparisons
    Quick: Do not need to find or schedule users
    Easy to review problem areas many times
    Inexpensive: No fancy equipment
    Validity: No users involved
    Finds fewer problems (perhaps 40–60% fewer?)
    Getting good experts
    Building consensus with experts
  • 23. Heuristic Evaluation Report
  • 24. Heuristic Evaluation Report
  • 25. User Testing
  • 26. User Testing
    People whose characteristics (or profiles) match those of the Web site’s target audience perform a sequence of typical tasks using the site.
    Ease of learning
    Speed of task performance
    Error rates
    User satisfaction
    User retention over time
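Several of the measures above can be summarized directly from session logs. A minimal sketch, assuming hypothetical per-participant records of task completion, time on task and error count (the data is invented for illustration):

```python
from statistics import mean, median

# Hypothetical session logs: (completed_task, seconds_on_task, error_count)
sessions = [
    (True, 95, 1),
    (True, 140, 0),
    (False, 300, 4),
    (True, 88, 2),
    (True, 110, 0),
]

# Task success rate across all participants.
success_rate = sum(c for c, _, _ in sessions) / len(sessions)
# Speed of task performance, usually reported for successful attempts only.
times = [t for c, t, _ in sessions if c]
# Mean errors per session.
error_rate = mean(e for _, _, e in sessions)

print(f"Task success: {success_rate:.0%}")
print(f"Median time on task (successes): {median(times)} s")
print(f"Mean errors per session: {error_rate:.1f}")
```

Medians are often preferred over means for time on task because one slow participant can skew a small sample badly.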
  • 27. Image from (nz)dave at http://www.flickr.com/photos/nzdave/491411546/
  • 28. Elements of User Testing
    Define target users
    Have users perform representative tasks
    Observe users
    Report results
  • 29. Why Multiple Evaluators?
    Single evaluator achieves poor results
    Only finds about 35% of usability problems
    5 evaluators find more than 75%
  • 30. Why only 5 Users?
    (Nielsen, 2000)
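Nielsen's (2000) argument rests on the curve found(n) = N(1 - (1 - L)^n), where N is the total number of usability problems and L is the proportion a single test user uncovers (about 31% in his data). A quick sketch of why five users already find the large majority of problems:

```python
# Proportion of usability problems found with n test users, per
# Nielsen (2000): found(n) = N * (1 - (1 - L)**n), with L ~= 0.31.
L = 0.31

def proportion_found(n, l=L):
    """Expected fraction of all problems uncovered by n users."""
    return 1 - (1 - l) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")
```

One user finds about a third of the problems; five find roughly 85%, after which each additional user mostly re-discovers known issues.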
  • 31. Reporting User Testing
    Overall goals/objectives
    Target profile
    Testing outline with test script
    Specific task list to perform
    Data analysis & results
  • 32. Recent Methods for User Testing
  • 33.–40. [Image-only slides: screenshots of recent user-testing tools, including Userfly, Google Website Optimizer and Five Second Test]
  • 41. 10 Second Usability Test
    Disable stylesheets
    Check for the following:
    Semantic markup
    Logical organization
    Only images related to content appear
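Parts of this checklist can be automated. Here is a minimal sketch using Python's standard-library HTML parser to flag images that lack alt text (the sample markup is invented; judging semantics and source order still needs a human):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that have no alt attribute; without styles
    (or for screen-reader users) these images convey nothing."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "<no src>"))

# Navigation marked up as an unordered list, one image missing alt text.
checker = AltTextChecker()
checker.feed('<ul><li><a href="/home"><img src="home.png"></a></li>'
             '<li><img src="logo.png" alt="Site logo"></li></ul>')
print(checker.missing_alt)
```

Running the checker over a page before disabling stylesheets gives a quick list of images that would disappear meaninglessly in the 10-second test.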
  • 42. Alpha, Beta & Field Testing
    Akin to prototyping
  • 43. References & Acknowledgements
    American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.
    Follett, A. (2009, October 8). 10 qualitative tools to improve your website. Instant Shift. Retrieved March 18, 2010 from http://www.instantshift.com/2009/10/08/10-qualitative-tools-to-improve-your-website/
    Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen’s Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html
    Reeves, T.C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Available at http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt
    Reeves, T.C. & Hedberg, J.C. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.
  • 44. Michael M. Grant 2010