Notes on the Evaluation of eLearning (Document Transcript)
Notes on the Evaluation of eLearning<br />Michael M. Grant, PhD<br />Kirkpatrick (& Phillips) Model<br />
Level 1: Reaction — participants’ reactions to the instruction
Level 2: Learning — the degree to which learning occurs as a result of the instruction
Level 3: Transfer — the transfer of learning to on-the-job behavior
Level 4: Organizational performance — the impact the learning has on the organization
Level 5: ROI — the cost of the training compared to its benefits to the organization and/or its productivity/revenue
Formative Evaluation<br />
What’s the purpose?
A focus on improvement during development.
Level 2 Evaluations
Appeal (Level 1: Reaction)
Data Collection Matrix
The matrix matches data collection methods against six questions:
1. What are the logistical requirements?
2. What are user reactions?
3. What are trainer reactions?
4. What are expert reactions?
5. What corrections must be made?
6. What enhancements can be made?
Methods (number of the six questions each addresses):
Anecdotal records (5), User questionnaires (4), User interviews (4), User focus groups (3), Usability observations (4), Online data collection (2), Expert reviews (3)<br />
“Vote early and often.”
The sooner formative evaluation is conducted during development, the more likely that substantive improvements will be made and costly errors avoided (Reeves & Hedberg, 2003, p. 142).
“Experts are anyone with specialized knowledge that is relevant to the design of your ILE” (Reeves & Hedberg, 2003, p. 145).
SMEs, instructional experts, graphic designers, teachers, and trainers
Interface Review Guidelines from http://it.coe.uga.edu/~treeves/edit8350/UIRF.html
Observations from one-on-ones and small groups
We need additional information about how the eLearning product is being used and how it will perform.
What Is Usability?<br />
“The most common user action on a Web site is to flee.” —Edward Tufte
“at least 90% of all commercial Web sites are overly difficult to use… the average outcome of Web usability studies is that test users fail when they try to perform a test task on the Web. Thus, when you try something new on the Web, the expected outcome is failure.” —Jakob Nielsen
Nielsen’s Web Usability Rules
Visibility of system status
Match between system and real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Help users recognize, diagnose, and recover from errors
Help and documentation
Aesthetic and minimalist design
Ease of learning - How fast can a user who has never seen the user interface before learn it sufficiently well to accomplish basic tasks?
Efficiency of use - Once an experienced user has learned to use the system, how fast can he or she accomplish tasks?
Memorability - If a user has used the system before, can he or she remember enough to use it effectively the next time or does the user have to start over again learning everything?
Error frequency and severity - How often do users make errors while using the system, how serious are these errors, and how do users recover from these errors?
Subjective satisfaction - How much does the user like using the system?
Two Major Methods to Evaluate Usability
Heuristic Evaluation Process
Several experts individually compare a product to a set of usability heuristics
Violations of the heuristics are rated for severity and extent, and solutions are suggested
At a group meeting, violation reports are categorized and assigned average severity ratings, extents, heuristics violated, and descriptions of opportunities for improvement
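The aggregation step above can be sketched in a few lines. This is an illustrative example only: the report fields and the 0-4 severity scale are assumptions for the sketch, not part of the slides.

```python
# Illustrative sketch: averaging severity ratings across evaluators for each
# reported heuristic violation. The report fields and the 0-4 severity scale
# are assumptions for this example.
from collections import defaultdict
from statistics import mean

# Each report: (violation id, heuristic violated, one expert's severity rating)
reports = [
    ("V1", "Visibility of system status", 3),
    ("V1", "Visibility of system status", 4),
    ("V2", "Consistency and standards", 2),
    ("V2", "Consistency and standards", 1),
    ("V2", "Consistency and standards", 2),
]

# Group the individual ratings by violation
ratings = defaultdict(list)
for vid, heuristic, severity in reports:
    ratings[(vid, heuristic)].append(severity)

# Average severity per violation, as assigned at the group meeting
summary = {key: round(mean(vals), 2) for key, vals in ratings.items()}
for (vid, heuristic), avg in sorted(summary.items()):
    print(f"{vid} ({heuristic}): average severity {avg}")
```

Sorting the resulting summary by average severity would give a prioritized list of opportunities for improvement.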
Advantages of Heuristic Evaluation
Quick: Do not need to find or schedule users
Easy to review problem areas many times
Inexpensive: No fancy equipment
Disadvantages of Heuristic Evaluation
Validity: No users involved
Finds fewer problems (perhaps 40-60% fewer)
Getting good experts
Building consensus with experts
User Testing Process
People whose characteristics (or profiles) match those of the Web site’s target audience perform a sequence of typical tasks using the site.
Ease of learning
Speed of task performance
User retention over time
Elements of User Testing
Define target users
Have users perform representative tasks
Why Multiple Evaluators?
Single evaluator achieves poor results
Only finds about 35% of usability problems
5 evaluators find more than 75%
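These percentages follow from the model in Nielsen's (2000) article cited in the references: the proportion of problems found by n evaluators is 1 - (1 - L)^n, where L is the average problem-finding rate of a single evaluator (about 0.31 across Nielsen's studies). A minimal sketch:

```python
# Nielsen's (2000) model: proportion of usability problems found by n
# independent evaluators, where a single evaluator finds about 31% on average.
def proportion_found(n, rate=0.31):
    return 1 - (1 - rate) ** n

for n in (1, 5, 15):
    print(f"{n:2d} evaluators: {proportion_found(n):.0%}")
```

With rate = 0.31, one evaluator finds about 31% of problems and five find roughly 84%, consistent with the figures above; the curve flattens quickly, which is why Nielsen recommends small test groups.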
Reporting User Testing
Testing outline with test script
Specific task list to perform
Data analysis & results
Recent Methods for User Testing
10 Second Usability Test
Check for the following: semantic markup, logical organization, and that only images related to the content appear
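The first item in that checklist, semantic markup, could be partially automated. The sketch below scans a page for HTML5 landmark tags as a quick proxy; the tag set and the sample page are assumptions for illustration, not from the slides.

```python
# Hypothetical sketch: scan a page for semantic HTML5 landmark tags as a
# quick proxy for "semantic markup." The tag set chosen here is an assumption.
from html.parser import HTMLParser

SEMANTIC_TAGS = {"header", "nav", "main", "article", "section", "aside", "footer"}

class SemanticTagScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC_TAGS:
            self.found.add(tag)

# Sample page for illustration
sample = "<body><header><h1>Title</h1></header><main><article>Text</article></main></body>"
scanner = SemanticTagScanner()
scanner.feed(sample)
print("semantic tags present:", sorted(scanner.found))
print("semantic tags missing:", sorted(SEMANTIC_TAGS - scanner.found))
```

A page that uses none of these landmarks is likely relying on generic containers, which is one quick signal worth a closer manual review.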
Alpha, Beta & Field Testing<br />
Akin to prototyping
References & Acknowledgements<br />American Society for Training & Development. (2009). The value of evaluation: Making training evaluations more effective. Author.<br />Nielsen, J. (2000, March 19). Why you only need to test with 5 users. Jakob Nielsen’s Alertbox. Retrieved from http://www.useit.com/alertbox/20000319.html<br />Reeves, T.C. (2004, December 9). Design research for advancing the integration of digital technologies into teaching and learning: Developing and evaluating educational interventions. Paper presented to the Columbia Center for New Media Teaching and Learning, New York, NY. Retrieved from http://ccnmtl.columbia.edu/seminars/reeves/CCNMTLFormative.ppt<br />Reeves, T.C., & Hedberg, J.C. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.<br />