A PROPOSED THEORETICAL MODEL FOR
EVALUATING E-LEARNING
Brenda Mallinson, Norman Nyawo
Rhodes University
ABSTRACT
The deployment of e-learning offers an opportunity to build the skills required for the 21st century knowledge-based
economy. It is important to be able to evaluate various e-learning systems and analyse their efficacy. The focus of this
paper is to investigate the area of e-learning evaluation in order to discover or formulate a framework or model that
would assist the successful evaluation of e-learning in Higher Education Institutions (HEIs). The manner in which
organisations currently implement e-learning evaluation is investigated. This paper critically assesses four current models
and determines how applicable they are to HEIs. Finally, the various perspectives are synthesised and inform the creation
of a new theoretical model for the implementation of successful e-learning evaluation. The proposed model attempts to
address the identified shortcomings, and is suggested for use as a guideline for evaluating e-learning in HEIs.
KEYWORDS
Evaluation, E-Learning, Higher Education Institutions
1. INTRODUCTION
Rosenberg (2006) redefines e-learning as "the use of Internet technologies to create and deliver a rich
learning environment that includes a broad array of instruction and information resources and solutions, the
goal of which is to enhance individual and organizational performance". Active learning strategies placing
the student at the heart of the education process can now be supported by a range of media deployed by an
HEI. Mostert and Hodgkinson-Williams (2006) report that the high level of hardware/software availability,
together with pervasive Internet access, are reflected in the growing prevalence of e-learning in HEIs.
An important goal of e-learning is that it should be equivalent to or better than learning provided by
conventional methods such as classroom-based instruction (Leung, 2003) and, as such, justify the return on
investment (ROI). Although there has been a significant increase in the use of e-learning in mainstream
education, very little research has been conducted to justify its use (Aivazidis et al., 2006), and the evaluation
of e-learning solutions is only partially resolved (Voigt and Swatman, 2004). HEIs considering the use of e-
learning are increasingly aware of the need for quality in both the development and implementation of their
online solutions, and evaluation of these systems will promote quality maintenance.
This study investigates how e-learning is or should be evaluated in HEIs in order to ascertain whether
their various e-learning technologies are providing them with a positive ROI. Current research on e-learning
evaluation, the purpose of evaluation, the motivation for evaluating e-learning systems, and the reasons why
some institutions may not want to evaluate their systems are investigated. Existing evaluation models are
examined, and an approach to e-learning evaluation that is designed to deal with all stages of the e-learning
cycle is shown. Finally, a new theoretical model is proposed to promote the effective evaluation of e-
learning. It is suggested that e-learning takes place in a social context and therefore any evaluation methods
and their impact on outcomes should take the surrounding constraints into consideration.
IADIS International Conference e-Learning 2008
• Evaluation is expensive and difficult: some organisations may lack the proper budget, skills and time
to evaluate their e-learning systems effectively (Horton, 2001). For smaller organisations or
institutions, evaluation may result in budget and time over-runs.
• Evaluation is political: the notion of evaluation often results in personnel feeling some discomfort
and even organisational paranoia. Instructors that use the traditional methods of teaching may feel
threatened if evaluation compares their methods to e-learning systems (Horton, 2001).
• Credibility of e-learning: the launch of questionable e-learning courseware combined with some less
successful e-learning implementations has bruised the image of e-learning and critics use this to
discredit the necessity of evaluation (Van Dam, 2004).
It was found that no single model exists for the evaluation of e-learning in HEIs. The following models
have been adapted by various authors in attempts to formulate a suitable e-learning evaluation model. There
are two main schools of thought: one follows the traditional Kirkpatrick-inspired views, and the other
follows a systematic approach to e-learning.
3. MODIFIED KIRKPATRICK MODELS
Many professionals turn to Kirkpatrick's model, comprising four ordered, structured levels, because it has
become an industry standard for evaluation. Most evaluations take a layered approach using the basic model
of: Level 1 – Response (Reaction); Level 2 – Learning; Level 3 – Performance (Behaviour); and Level 4 –
Results (Horton, 2001). The first model examined is Van Dam's (2004) expanded Kirkpatrick model, with
two new levels inserted. Level 0 (Participation) was added because participation has evolved into an
important factor in e-learning evaluation; it can be measured by counting website hits, downloads, live
plays, orders, unique users, live e-learning attendance and overall usage. The additional Level 3 (Job
Application) is related to Level 0 (Participation).
Table 1. Van Dam's modified Kirkpatrick evaluation model (Van Dam, 2004)
Level 0 – Participation: This focuses on the level of participation and interaction with the application.
Level 1 – Response (Reaction): Was the course liked by students? Was it completed? This level gauges the
learners' satisfaction with the training program.
Level 2 – Learning: Did the students gain any knowledge or skills? This level verifies improvement in skill,
acquisition of knowledge, or positive change in attitude.
Level 3 – Job Application: Did they use it? This level ascertains whether the acquired skills were later used.
Level 4 – Performance (Behaviour): Did the course improve student performance? This level determines the
impact of training on behaviour, on-the-job performance and application of learned skill.
Level 5 – Results: Was there a good ROI for the institution? This level ascertains whether the training
program achieved or impacted desired end-results.
The second model is the result of Beal (2007) proposing that the ADDIE model (Analysis, Design,
Development, Implementation, and Evaluation) be used in conjunction with Kirkpatrick's model. The most
widely used methodology for developing new education and training programs is Instructional Systems
Design (ISD). This approach provides a step-by-step system for the evaluation of students' needs, the design
and development of training materials, and the evaluation of the effectiveness of the training intervention.
Almost all ISD models are based on the generic ADDIE model (Beal, 2007). Each step has an outcome that
feeds the subsequent step, and the five phases represent a dynamic, flexible guideline for building effective
training and performance support tools. Evaluation design for e-learning usually takes place only at the end
of the development process, when ideally it should take place at the beginning. The Evaluator's Project
Report Summary (Table 2), which integrates the ADDIE model with Kirkpatrick's model, illustrates how
useful such integrated evaluation can be, and that evaluation should not be confined to the steps suggested
by Kirkpatrick. The most important linking phase is the Evaluation phase, which focuses on how well
participants have mastered the learning content and on the effectiveness of the training programme or
application.
The fourth model is Voigt and Swatman's (2004) suggested use of Fricke's model, which includes nine
evaluation forms that consider a variety of prescriptive and descriptive research questions. Fricke's model is
designed to deal with both the stages of the e-learning system life cycle and a variety of learning
environments (Voigt and Swatman, 2004). This model also emphasises the importance of context when
evaluating e-learning systems. E-learning in a social context is an open system, in that the system influences
the environment and vice versa, making it vulnerable to a number of external and internal contextual forces.
It is clear that any context-situated learning research must first define what should be evaluated and where
context comes into play. Fricke's model established a popular framework for the design and evaluation of
multimedia-based instruction. Fricke identified five evaluation categories: Instructional conditions;
Instructional outcomes; Instructional methods; Assumptions; and General conditions. The last two
categories, in particular, help to integrate contextual information into evaluation design: 'Assumptions'
helps to clarify the norms and values underlying the evaluation design, and 'General conditions' describes
the non-scientific nature of evaluations (Voigt and Swatman, 2004). The model suggests that evaluation be
seen as an ongoing process in the quest for transparency and better decision quality (Table 4).
Table 4. Fricke's Evaluation Criteria: Contextual Variables (Voigt and Swatman, 2004)
C1 Learner's previous knowledge, attitudes & experiences
C2 Content to be learned
C3 Instructional outcomes
C4 Instructional methods
C5 Instructional settings
C6 Implicit learning and instructional theories
C7 Explicit learning and instructional theories
C8 Priorities of learning outcomes
C9 Financial resources and skills available
C10 Political guidelines
5. CRITICAL ANALYSIS OF THE CURRENT MODELS
Van Dam's (2004) adapted Kirkpatrick model is a good summary of the important steps that should be
included in any evaluation of a training programme or application. The two extra levels added by Van Dam
(2004) make the model more applicable to various contexts. However, the model is caught in the trap of
adhering to Kirkpatrick's generic model, which neglects the fact that systematic approaches are used to
design and develop these e-learning systems; it is therefore important to take the systems design model into
consideration (Reeves and Hedberg, 2003). Jochems et al. (2004) highlight that Kirkpatrick's model is
partial and has to be revised conceptually to be applicable, particularly in e-learning environments.
Beal's (2007) integrated model is a good evaluation framework for e-learning systems as it takes into
account the systematic approach to evaluation. Reeves and Hedberg (2003) highlight how important the ISD
approach is for developing and evaluating education and training programmes. As the model implies some
iteration, it allows for a more thorough evaluation process that can guide evaluators towards a more
detailed, systems-based approach. This more systematic approach is aligned with current best practice and
educational standards. A disadvantage of the model is that it fails to ask the crucial questions regarding the
experience that users gained from the training application, or how well the training helped them perform on
the job. These questions could be addressed by the use of Kirkpatrick's guidelines (Jochems et al., 2004).
The most important part of any evaluation model is to query the effectiveness of the evaluation process
itself. This is not expressed clearly in the ADDIE model, which has been criticised by some as being too
systematic: too linear, too inflexible, too constraining, and even too time-consuming to implement. As an
alternative, there are a variety of systemic design models that emphasise a more holistic, iterative approach
to development. Rather than developing the instruction in phases, the entire development team works
together from the start to rapidly build modules that can be tested with the student audience and then
revised based on their feedback. Although this approach has many advantages for the creation of e-learning,
there are practical challenges in the management of resources. Frequently, training programmes must be
developed under a fixed and often limited budget and schedule. While it is easy to allocate people and time
to each step in the ADDIE model, it is harder to plan deliverables when there are no distinct steps.
The eLSE model focuses on user testing and obtaining direct user feedback to ascertain whether the
training application is effective. Evaluation patterns are created that can be used repeatedly, standardising the
whole procedure. The breaking down of the evaluation into a systematic inspection and a user-based
Figure 1. Proposed Model for E-Learning Evaluation
In order to understand the new model (Figure 1), it is important to re-examine some misconceptions
underlying the current models and comment on how the proposed model directly improves on them.
Misconception 1: Level 4 of Kirkpatrick's model is superior. Kirkpatrick's Levels 1 to 4 measure
different aspects, but Level 4 is often described as a 'higher' level of evaluation. There is a view that Level 4
is the pinnacle of the model because it is concerned with ROI and results.
Misconception 2: Level 3 is difficult to measure. Many measures are not appropriate or not sensitive
enough to detect changes in learners' behaviours. It is difficult to ask the correct questions and obtain
accurate, truthful responses from people. Human behaviour is generally difficult to measure, and thus the
measurement methods are not fully reliable.
Misconception 3: Evaluation equals effectiveness. This is not necessarily true: evaluation should focus
on the learning aspect of the subject (Level 2), while effectiveness focuses on whether the training has
produced the intended results (Levels 3 and 4). Evaluation and effectiveness are linked, but they should not
necessarily be arranged in a continuum as they are in Kirkpatrick's model.
Misconception 4: The waterfall approach is the most suitable method. This approach has its own
disadvantages, such as the lack of a clean division between phases in the life cycle: not all problems arising
in a phase are resolved within that phase; instead, unresolved problems are carried over to the next phase,
where they consume much of its time. The proposed model therefore uses a spiral approach in its systematic
evaluation, avoiding the carry-over of issues by resolving problems in further iterations.
Misconception 5: External variables are not relevant to the evaluation process. Most of the traditional
evaluation models overlook the impact of external variables on the evaluation process. The