Meta-Analysis of Interaction in Distance Education
Interactions in Distance Education 1
A Meta-Analysis of Three Types of
Interaction Treatments in Distance Education
A Review of the Literature
Ed. D. in Distance Education Student
Professor Patrick Fahy
EDDE 801, Athabasca University
October 22, 2009
A Meta-Analysis of Three Types of Interaction Treatments in Distance Education
A Review of the Literature
The authors, professors and doctoral students at Concordia University, Canada, used meta-analysis
to combine the results of 74 empirical studies completed between 1985 and 2006 in order to determine
how the three types of interaction treatments (ITs) defined by Moore (1989), student-instructor interaction
(ST), student-student interaction (SS), and student-content interaction (SC), were associated with
learning outcomes, and how the strength of the treatments influenced learning in distance education (DE).
Unlike the many comparative discussions of the similarities and differences between distance
education (DE) and classroom instruction (CI) (Bernard et al., 2004; Russell, 1999, as cited in this
article; Smith & Dillon, 1999), this article focused on comparisons of DE versus DE. The researchers
expected to find:
• The effects of the three types of ITs on achievement in previous DE research studies
• The differences in achievement when different combinations of ITs were used in DE, as
stated in Anderson's (2003) equivalency theorem
• The effects of ITs in asynchronous and synchronous forms of DE.
Various definitions of interactions were discussed in this article ranging from human-human
interaction to content-driven interaction, as well as from instructional interaction to social interaction.
Meta-Analysis and Effect Size
A meta-analysis uses statistical methods to combine the results of several studies that address a
set of related research hypotheses. A main purpose of conducting a meta-analysis is to increase statistical
power when individual studies have small sample sizes. By pooling the results of a larger number of
related studies, a well-performed meta-analysis can produce more accurate estimates than any single
study (Wikipedia, 2009).
A critical step in a meta-analysis is to identify a common measure of effect size across the
selected studies. In statistics, effect size is a measure of the strength of the relationship between two
variables in a population (Wikipedia, 2009): for example, how strongly a particular interaction treatment
is related to the achievement of a certain group of learners in a certain learning setting (e.g.,
synchronous/asynchronous, self-paced/collaborative learning, etc.).
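As a concrete illustration of the kind of common metric a meta-analysis extracts from each primary study, the sketch below computes a standardized mean difference (Cohen's d) between a treatment and a control group. The group statistics are hypothetical illustration values, not data from the reviewed article.

```python
# Minimal sketch: Cohen's d, a standardized effect size comparable across studies.
# All numbers below are hypothetical, not taken from the reviewed meta-analysis.
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between a treatment and a control group."""
    # Pooled standard deviation across the two groups
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study: a DE course with a strong SC interaction treatment
# versus a weaker-interaction version of the same course
d = cohens_d(mean_t=78.0, sd_t=10.0, n_t=30, mean_c=72.0, sd_c=10.0, n_c=30)
print(round(d, 2))  # 0.6
```

Because d is expressed in standard-deviation units rather than raw test scores, studies that used different achievement measures can be placed on one scale before being combined.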
Data sources for the studies used in this meta-analysis were publicly available databases (e.g.,
EBSCO, ProQuest, and ERIC), relevant journals (e.g., American Journal of Distance Education, Canadian
Journal of Learning and Technology, and Distance Education), earlier reviews (e.g., Russell, 1999, and
Bernard et al., 2004), and web search results. The researchers found more than 6,000 abstracts through this
comprehensive search. After screening, 1,034 full texts were retrieved. Each full-text manuscript was then
rated against predefined inclusion and exclusion criteria, leaving a final pool of 74 studies.
Figure 1: Methodology of this meta-analysis
To categorize and code the studies, the researchers took several steps, including assigning the
single most prevalent interaction type (ST, SS, or SC) to each study, extracting effect sizes from the
original studies, and identifying the experimental and control group for each study. After the studies
were categorized, the researchers evaluated and coded the interaction strength of each IT and of pairs of
ITs. They also coded the cumulative strength of the three ITs and other study features such as
demographics, DE mode (synchronous, asynchronous, and mixed), and methodological quality.
Coded data were entered into Comprehensive Meta-Analysis 2.0 for the main analyses.
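The article does not describe the internal computations of Comprehensive Meta-Analysis 2.0, but the general technique such tools apply can be sketched: coded effect sizes from independent studies are combined into one weighted average, with more precise (lower-variance) studies receiving more weight. The sketch below shows a standard fixed-effect, inverse-variance pooling on hypothetical coded values.

```python
# Hedged sketch of fixed-effect, inverse-variance pooling of effect sizes.
# Illustrates the general meta-analytic technique only; the study data below
# are hypothetical, and the exact model used by Comprehensive Meta-Analysis 2.0
# is not described in the reviewed article.

def pool_fixed_effect(effects):
    """effects: list of (effect_size, variance) pairs, one per coded study."""
    weights = [1.0 / var for _, var in effects]  # precise studies weigh more
    pooled = sum(w * es for (es, _), w in zip(effects, weights)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)         # variance of the pooled estimate
    return pooled, pooled_variance

# Three hypothetical coded studies: (effect size, variance)
studies = [(0.40, 0.04), (0.25, 0.02), (0.55, 0.08)]
pooled, variance = pool_fixed_effect(studies)
print(round(pooled, 3))  # 0.336
```

The pooled estimate's variance shrinks as studies accumulate, which is the statistical-power advantage of meta-analysis noted above.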
Following are the results of this meta-analysis:
1. What are the effects of the three kinds of interaction (SS, ST, and SC) on achievement?
ST ITs were less effective, or added less value, than SS or SC ITs. A possible reason is
that ST ITs are more difficult to implement consistently.
2. Does more overall IT strength promote better achievement?
Yes; increasing the overall strength of ITs was associated with better achievement outcomes.
3. Do increases in treatment strength of any of the three different forms of interaction result in
better levels of achievement?
Only SC ITs showed a significant positive effect on achievement outcomes in DE.
4. Which combinations of SS, ST, and SC interaction most affect achievement?
Comparing the effects of the paired combinations of SS, ST, and SC ITs, the SS + SC and
ST + SC combinations produced significant gains in student achievement, but SS + ST did not.
5. Are there differences among synchronous, asynchronous, and mixed forms of DE in terms of
achievement?
No significant difference among synchronous, asynchronous, and mixed forms of DE emerged
from the results of the study. The authors noted, "The 'best of both worlds' prediction for
mixed courses was not borne out statistically in these results" (p. 1261). Since the number of
effect sizes was small (49 studies remained in this comparative analysis of mixed patterns
of DE once unlike comparisons were removed), this result needs further study.
6. What is the relationship between treatment strength and effect size for achievement outcomes
in asynchronous-only DE studies?
SC strength showed a significant relationship with achievement in asynchronous-only DE studies.
This literature review exemplified the advantages that meta-analysis can bring to educational
research, which has been criticized for the limited generalizability of small-sample studies and for poor
methodological quality. The evidence is clear regarding the capability of meta-analysis to integrate and
review the results of many independent studies.
The authors noted that the results of this meta-analysis should be interpreted in relative rather
than absolute terms when arguing for one treatment over another. Two suggestions were provided:
• Course designers need to associate ITs strongly with core content, take advantage of
emerging technologies to implement stronger ITs, increase not only the quantity but also the
quality of ITs, and find the balance between the interaction provided and the "human and
technical resources needed for cost-effectiveness" (p. 1266).
• New DE studies are needed to investigate how the interactivity underlying such treatments
actually functions in practice.
A final point worth noting is that while this study found "strong support for
Anderson's (2003) hypothesis" (p. 1265), another study of interaction equivalency by Rhode
(2009) found that not all forms of interaction are equally valued by learners, or equally
effective, due to learner preferences.
References
Anderson, T. (2003). Getting the Mix Right Again: An Updated and Theoretical Rationale for Interaction.
The International Review of Research in Open and Distance Learning, 4(2). Retrieved August
24, 2009, from http://www.irrodl.org/index.php/irrodl/article/view/149/230
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does
distance education compare with classroom instruction? A meta-analysis of the empirical
literature. Review of Educational Research, 74(3), 379-439. Retrieved from http://0-
Moore, M. G. (1989). Editorial: Three Types of Interaction. The American Journal of Distance Education,
3(2), 1-6. Retrieved from http://aris.teluq.uquebec.ca/Portals/598/t3_moore1989.pdf
Rhode, J. F. (2009). Interaction equivalency in self-paced online learning environments: Exploration of
learner preference. International Review of Research in Open and Distance Learning, 10(1).
Retrieved from http://www.irrodl.org/index.php/irrodl/article/viewFile/603/1179
Russell, T. L. (1999). The No Significant Difference Phenomenon. Chapel Hill: Office of Instructional
Telecommunications, North Carolina State University.
Smith, P. L., & Dillon, C. L. (1999). Comparing Distance Learning and Classroom Learning: Conceptual
Considerations. American Journal of Distance Education, 13(2), 6-23.
Egger, M., & Smith, G. D. (1997). Meta-analysis: Potentials and promise. BMJ (Clinical Research Ed.),
315(7119), 1371-1374.
Figure 2: Detailed research process for this meta-analysis