3. WHAT IS EVIDENCE-BASED PRACTICE?
Definition: the idea that instructional techniques should be based on research findings and research-based theory.
o Designing a course by looking at what the research and evidence say instead of personal ideologies, fads, opinions, politics, etc.
4. WHAT IS EVIDENCE-BASED PRACTICE?
“No one would think of getting to the moon or of wiping out a disease without research. Likewise, one cannot expect reform efforts in education to have significant effects without research-based knowledge to guide them.”
~ Shavelson & Towne (2002, 9.1)
5. SUMMARY: EVIDENCE-BASED PRACTICE
IS…
looking at what the preponderance of evidence has to say about a particular instructional feature to help make decisions about how to design e-Learning.
IS NOT…
reading research studies expecting them to tell you exactly what to do.
6. HOW TO RECOGNIZE HIGH-QUALITY RESEARCH:
Ask What, When, and How
7. WHICH METHOD IS BEST?
- There is no single best research method; you can use multiple methods in one study, because different methods are suited to answering different questions.
- Overall, what makes research useful is that the method is appropriate to the research question.
“The simple truth is that the method used must fit the question asked.”
~ Shavelson & Towne (2002, p. 63)
8. WHAT TO LOOK FOR IN EXPERIMENTAL COMPARISONS
Focus on…
1) situations that are like yours.
2) studies that use the appropriate research method.
3) experimental comparisons that meet the criteria of
good research methodology. This means they have:
Experimental control
Random assignment
Appropriate measures
9. WHAT IS “EXPERIMENTAL CONTROL”?
The experimental group and the control group should receive identical treatments except for one feature (i.e., the instructional treatment being tested).
10. WHAT IS “RANDOM ASSIGNMENT”?
Learners are randomly assigned to groups (or treatment conditions).
11. WHAT ARE “APPROPRIATE MEASURES”?
The research report tells you the mean (M), standard deviation (SD), and sample size (n) for each group on a relevant measure of learning.
12. HOW TO INTERPRET “NO EFFECT” IN EXPERIMENTAL COMPARISONS
At the end of an experiment, if you see no difference between the treatment group and the control group, here are six possible reasons to consider:
13. #1 -- STATISTICAL SIGNIFICANCE: PROBABILITY LESS THAN 0.05
- If you conclude there is a difference in performance between two groups (e.g., in test scores), the probability measure should show that there is less than a 5% chance of seeing a difference that large when no real difference exists.
- In general, when the probability is less than 0.05 (p < 0.05), researchers conclude that the difference is real, that is, “statistically significant”.
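One way to build intuition for what p < 0.05 means is a simple permutation test: shuffle the group labels many times and count how often chance alone produces a mean difference as large as the one observed. A minimal sketch in Python; the test scores below are hypothetical, not from any study cited here:

```python
import random
import statistics

def permutation_p_value(treatment, control, n_permutations=10_000, seed=42):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of label shuffles whose absolute mean difference
    is at least as large as the observed one -- an estimate of p.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(treatment) - statistics.mean(control))
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_t]) - statistics.mean(pooled[n_t:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical post-test scores for two groups of learners.
treatment = [78, 85, 90, 82, 88, 91, 84, 87]
control = [72, 75, 80, 70, 74, 78, 73, 76]
p = permutation_p_value(treatment, control)
print(f"p = {p:.4f}")  # p < 0.05: the difference is unlikely to be chance alone
```

If the groups were drawn from the same population, shuffling labels would produce differences this large fairly often, and p would come out high.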
14. #2 -- EFFECT SIZE GREATER THAN 0.5
- When you take the difference in means between two groups and divide by the standard deviation of the control group (or of both groups pooled together), you get the effect size (ES): how many standard deviations one group's mean is above the other's.
Example: ES = 1 means the treatment group scored, on average, one standard deviation higher than the control group.
- ES > 0.8: strong effect
- ES = 0.5: moderate effect
- ES < 0.2: weak effect
Be interested in effect sizes greater
than 0.5, that is, instructional
methods that have been shown to
boost learning scores by more than
half of a standard deviation.
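The calculation described above, with both groups pooled together, is commonly known as Cohen's d. A short sketch in Python; the score data are hypothetical:

```python
import statistics

def effect_size(treatment, control):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation of both groups."""
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    s_t, s_c = statistics.stdev(treatment), statistics.stdev(control)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * s_t**2 + (n_c - 1) * s_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Hypothetical post-test scores for two groups of learners.
treatment = [78, 85, 90, 82, 88, 91, 84, 87]
control = [72, 75, 80, 70, 74, 78, 73, 76]
d = effect_size(treatment, control)
print(f"ES = {d:.2f}")  # well above 0.8, a strong effect by the benchmarks above
```

Note that a study reporting M, SD, and n for each group gives you everything this function needs, which is one reason those are the "appropriate measures" to look for.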
16. 5 QUESTIONS THAT CAN HELP YOU IDENTIFY RELEVANT RESEARCH:
1) How similar are the learners in the research study to your learners? Example –
Research conducted on children may be limited in its applicability to adult populations.
2) Are the conclusions based on an experimental research design? Look for subjects
randomly assigned to test and control groups.
3) Are the experimental results replicated? Look for reports of research in which conclusions
are drawn from a number of studies that essentially replicate the results.
4) Is learning measured by tests that measure application? Research that measures
outcomes with recall tests may not apply to workforce learning goals in which the learning outcomes
must be application, not recall, of new knowledge and skills.
5) Does the data analysis reflect practical significance as well as statistical
significance? With a large sample size, even small learning differences may reach statistical
significance, yet may not justify the expense of implementing the tested method. Look for a
p value of .05 or less and an effect size of .5 or more.