A summary of chapter three, Evidence-Based Practice, in e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning by Ruth C. Clark and Richard E. Mayer.
2. What is Evidence-Based Practice?
Instructional techniques that are based on high-quality research findings and
research-based theory.
[Diagram: design decisions may be based on evidence, common sense, ideology, fads, opinions, or politics.]
4. Approaches to Research on Instructional Effectiveness
(Research Question - Example - Research Method)
1. What works? - Does an instructional method cause learning? - Experimental Comparisons
2. When does it work? - Does an instructional method work better for certain learners, materials, or environments? - Factorial Experiments
3. How does it work? - What learning processes determine the effectiveness of an instructional method? - Observational Studies
5. Experimental Comparisons - What to look for?
1. Select research that is similar to yours -
instructional methods, learners, materials,
and learning environments.
2. Select research that uses the appropriate
research method.
3. Select research that meets good
experimental research methodology criteria:
a. Experimental Control - overall identical treatments except
for one feature.
b. Random Assignment - randomly assigned to groups.
c. Appropriate Measures - reports the mean (M), standard deviation (SD), and number of learners (n) for each learning group.
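Criterion (b), random assignment, is simple to sketch in code. This is a minimal illustration only; the function name and the 50/50 split are assumptions, not details from the chapter:

```python
import random

def random_assignment(learners, seed=None):
    """Randomly split learners into a treatment and a control group.

    Shuffling before splitting gives every learner an equal chance of
    landing in either group, so pre-existing differences tend to
    average out across groups.
    """
    rng = random.Random(seed)  # seed only to make a split reproducible
    pool = list(learners)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

treatment, control = random_assignment(["L1", "L2", "L3", "L4", "L5", "L6"], seed=7)
```

With the groups formed this way, the one remaining difference between them should be the instructional feature under study (criterion a).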
6. Interpreting No Effect in Experimental Comparisons
1. Ineffective treatment - instructional treatment did not influence learning.
2. Inadequate sample size - insufficient number of learners.
3. Insensitive measure - learning measure was not sensitive enough to detect
learning difference.
4. Inadequate treatment implementation - treatment and control groups were
very similar.
5. Insensitive learners - the learning materials were too easy or the learners were insensitive to the treatment.
6. Confounding variables - another important variable differs between the treatment and control groups.
7. Interpreting Research Statistics
1. Find the averages for each group.
2. Standard deviation tells how much variation there is within each group.
3. Powerful instructional methods
should yield high averages and low
standard deviations.
Figure 3.5. Computing Effect Size for the Differences Between Mean Test Scores on Two Lessons.
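The computation behind Figure 3.5 can be sketched as follows. This sketch uses one common definition of effect size, the difference in group means divided by the control group's standard deviation; the test scores below are made up for illustration:

```python
from statistics import mean, stdev

def effect_size(treatment_scores, control_scores):
    """Effect size (ES): difference in mean test scores divided by
    the control group's standard deviation."""
    return (mean(treatment_scores) - mean(control_scores)) / stdev(control_scores)

# Hypothetical test scores from two versions of a lesson
lesson_a = [85, 90, 80, 95, 75]  # treatment group
lesson_b = [70, 75, 65, 80, 60]  # control group

es = effect_size(lesson_a, lesson_b)  # (85 - 70) / 7.91, roughly 1.9
```

Note how this matches point 3 above: a large difference in averages combined with a small standard deviation yields a large effect size.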
8. Interpreting Research Statistics
4. Statistical Significance - probability (p) less than .05.
a. The observed difference between the groups is unlikely to be due to chance.
5. Practical Significance - effect size (ES) greater than .5.
a. ES = 1 - the group means differ by one full standard deviation.
b. ES = .8 or higher - large effect.
c. ES = .5 - moderate effect.
d. ES < .2 - too small an effect to worry about.
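The rules of thumb above can be collected into a small helper. The function name and the "small" label for the gap between .2 and .5 (which the slide does not cover) are assumptions for illustration:

```python
def interpret_effect_size(es):
    """Map an effect size to the rough practical-significance labels above."""
    if es >= 0.8:
        return "large"
    if es >= 0.5:
        return "moderate"
    if es >= 0.2:
        return "small"
    return "too small to worry about"

print(interpret_effect_size(0.9))  # prints "large"
```

Both checks matter: a result can be statistically significant (p < .05) yet have an effect size too small to be of practical importance.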
9. How to Identify Relevant Research
Consider the following when selecting research
studies:
1. How similar are the learners in the research study
to your learners?
2. Are the conclusions based on an experimental
research design?
3. Are the experimental results replicated?
4. Is learning measured by tests that measure
application?
5. Does the data analysis reflect practical
significance as well as statistical significance?
10. Evidence-Based Practice - What We Don’t Know
1. Additional research on instructional methods is needed to support meta-analyses.
2. For many instructional methods, the existing research is too limited to show whether or not they work.