The three categories of experimental designs are explained, along with representative designs in each. The categories are compared, and guidance on when to select each type and design is included in the presentation. Essential concepts such as variables and the validity of experiments are also covered.
Introduction to Experimental Research
Experimental research is a systematic
and scientific approach to investigating
cause-and-effect relationships by
manipulating one or more variables and
observing the effects on other variables.
Variables
Variables are the characteristics or properties
that can be measured, manipulated, or
controlled in an experiment.
e.g. – Age, Gender, Achievement, Awareness,
Skill, Habits, Teaching methods, techniques
etc.
Types of Variables
Independent Variable (IV):
Definition: The variable that is manipulated or changed by
the researcher to observe its effect on the dependent
variable.
Example: In an educational study, different teaching
methods (e.g., traditional lecture, active learning) can be the
independent variable.
Dependent Variable (DV):
Definition: The variable that is observed or measured to
assess the effect of the independent variable.
Example: Student performance (grades, test scores) can be
the dependent variable in an educational study comparing
teaching methods.
Types of Variables
Extraneous Variable:
Definition: Variables that can affect the outcome of
the experiment but are not the focus of the study.
Purpose: To identify and control for potential
sources of error or bias in the experiment.
Example: Time of day, student motivation, or
environmental factors could be extraneous variables
in an educational study.
Types of Variables
Control Variable:
Definition: Variables that are kept constant or
controlled to prevent them from influencing the
results of the experiment.
Purpose: To isolate the effect of the independent
variable on the dependent variable by eliminating
the influence of other variables.
Example: If studying the impact of a new
teaching method, the teacher's experience level
might be controlled to ensure it doesn't affect the
results.
Types of Variables
Status variable: in the context of the social
sciences and sociology, a characteristic or
attribute of an individual, group, or organization
that is measured rather than manipulated.
e.g. knowledge, belief, skill, opinion, etc.
Internal Validity
Internal validity in educational research refers to
the extent to which an experimental study
accurately measures the relationship between the
independent variable(s) and the dependent
variable(s) without the influence of extraneous
variables (e.g., history, maturation, interest, IQ).
In simpler terms, it assesses whether the changes
observed in the dependent variable(s) can be
confidently attributed to the manipulation of the
independent variable(s) and not to other factors.
External Validity
External validity in educational research refers to the
extent to which the findings of a study can be
generalized or applied to settings, populations,
times, and measures other than those used in the
study (e.g., sampling, representation, tools, statistics).
In other words, it assesses whether the results
obtained in a specific experimental context can be
extended to broader situations, including different
people, places, and conditions.
Categories of Experimental Designs
Non-/Pre-Experimental Designs:
1. Post-test Only Design
2. Pre-test and Post-test Design
3. Static Group Comparison Design
True Experimental Designs:
1. Pre-test Post-test Control Group Design
2. Post-test Only Control Group Design
3. Solomon Four-group Design
4. Factorial Design
Quasi-Experimental Designs:
1. Time Series Design
2. Non-equivalent Design
3. Separate Sample Pre-test Post-test Design
1. Pre-Experimental Designs
Definition: Making observations and collecting data
without implementing specific interventions.
Characteristics:
Limited Control: Pre-experimental designs lack
control over extraneous variables, making it
challenging to establish causality.
No Control Group: These designs usually do not
include a control group, making it difficult to
compare the outcomes against a baseline.
2. True Experimental Designs
Definition: It involves the random assignment of
participants into experimental and control
groups, allowing researchers to establish cause-
and-effect relationships.
Characteristics:
Randomization: Participants are randomly
assigned, ensuring that each group is
comparable at the start of the study.
Controlled Variables: Researchers carefully
control extraneous variables, isolating the effect
of the independent variable.
3. Quasi-Experimental Design
Definition: Quasi-experimental designs in
educational research share similarities with true
experimental designs but lack complete
randomization. Researchers use existing groups
or conditions, leading to less control than in true
experiments.
Characteristics:
Partial Randomization: Participants are not
entirely randomly assigned, often due to practical
or ethical constraints.
3. Quasi-Experimental Design
Controlled Variables: Researchers attempt to control
extraneous variables to the extent possible, but the lack of
full randomization can introduce biases.
Comparison
Control and Randomization: True experimental
designs provide the highest level of control and
involve randomization. Quasi-experimental
designs have less control due to partial
randomization, while pre-experimental designs
lack both control and randomization.
Comparison
Causality: True experimental designs
allow for strong causal inferences due to
randomization and controlled variables.
Quasi-experimental designs allow for
moderate causal inferences, while pre-
experimental designs offer weak causal
inferences due to limited controls and
lack of randomization.
Comparison
Applicability in Education: Pre-experimental
designs might be used in preliminary
observations, true experimental designs are ideal
for establishing causality when feasible, and
quasi-experimental designs are valuable in
educational settings where complete
randomization is challenging due to practical or
ethical reasons.
Conclusion
True experimental designs offer the highest
level of control, randomization, and confidence
in establishing cause-and-effect relationships.
Quasi-experimental designs provide a middle
ground, allowing for meaningful insights in
situations where full experimental control is
not possible.
Pre-experimental designs, while useful for
initial observations, are limited in their ability
to establish strong causal relationships due to
their lack of control and randomization.
Post-test Only Design
Example: A school district introduces a new online
learning platform for teaching mathematics to middle
school students. After implementing the platform for a
semester, the students' math scores on the standardized
state test are compared to those of students in neighboring
districts without the online platform. The difference in
scores between the two groups is attributed to the new
learning platform.
Statistics: Independent t-test
Advantages: Simplicity, less chance of experimental bias.
Limitations: Lack of pre-treatment baseline data,
potential threats to internal validity.
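The independent t-test named above can be run in a few lines with SciPy. The scores below are hypothetical, invented purely to illustrate the analysis of a post-test-only comparison.

```python
# Independent t-test for a post-test-only comparison of two groups.
# All scores are hypothetical, for illustration only.
from scipy import stats

platform_scores = [78, 85, 90, 72, 88, 81, 77, 84]  # district using the platform
control_scores = [70, 75, 80, 68, 74, 79, 71, 73]   # neighbouring districts

t_stat, p_value = stats.ttest_ind(platform_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value would suggest the groups differ, but without pre-treatment data the difference cannot be confidently attributed to the platform.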
Pre-test and Post-test Design
Example: A researcher investigates the effectiveness of a
reading intervention program for elementary school
students. Before the intervention, students' reading levels
are assessed (pre-test). The intervention is then
implemented over an academic year. After the intervention,
the students' reading levels are assessed again (post-test).
By comparing the pre-test and post-test scores, the
researcher determines the impact of the intervention on
the students' reading abilities.
Statistics: Paired t-test, Analysis of Covariance (ANCOVA)
Advantages: Allows for comparison, helps control for
individual differences, assesses change over time.
Limitations: Time-consuming, potential for testing effects,
cost.
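The paired t-test mentioned above compares each student's own pre- and post-test scores. The reading scores below are hypothetical, used only to sketch the analysis.

```python
# Paired t-test on each student's pre- vs post-intervention reading scores.
# Scores are hypothetical, for illustration only.
from scipy import stats

pre_scores = [45, 52, 38, 60, 47, 55, 41, 49]
post_scores = [58, 61, 50, 66, 59, 63, 52, 60]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing controls for individual differences: each student serves as their own baseline.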
Static Group Comparison Design
Example: A study examines the impact of parental
involvement on students' academic performance. Researchers
compare the final exam scores of students whose parents are
actively involved in their education (Group A) with the scores
of students whose parents are less involved (Group B). Since
the groups were not randomly assigned, the study uses a
static group comparison to analyze the differences in
academic performance between the two groups.
Statistics: Independent t-test, Analysis of Variance (ANOVA)
Advantages: Useful when randomization is difficult, allows
for comparison of naturally occurring groups.
Limitations: Lack of control over group differences, potential
biases due to non-random assignment.
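For a static group comparison with exactly two groups, the independent t-test and a one-way ANOVA are mathematically equivalent (F = t²). The exam scores below are hypothetical, sketching both analyses.

```python
# Independent t-test vs one-way ANOVA on two naturally occurring groups.
# With two groups, F = t^2 and the p-values coincide. Scores are hypothetical.
from scipy import stats

group_a = [82, 88, 85, 90, 79, 86]  # high parental involvement
group_b = [75, 80, 78, 72, 77, 74]  # low parental involvement

t_stat, p_t = stats.ttest_ind(group_a, group_b)
f_stat, p_f = stats.f_oneway(group_a, group_b)
print(f"t^2 = {t_stat**2:.2f}, F = {f_stat:.2f}")  # identical for two groups
```

ANOVA becomes the natural choice when comparing three or more naturally occurring groups.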
Selection of Design
Post-test Only Design:
A baseline or pre-treatment measurement is not necessary.
Random assignment to groups is not feasible or practical.
Pre-test and Post-test Design:
Comparing changes over time is essential.
Controlling for individual differences and assessing
individual growth is necessary.
Static Group Comparison Design:
Random assignment is not possible or ethical.
Naturally occurring groups exist, and the researcher wants
to compare their outcomes.
Pre-test Post-test Control Group Design
Definition: A true experimental design involving random
assignment of participants into experimental and control
groups, pre-testing both groups, applying the treatment to the
experimental group, and post-testing both groups.
Statistics - Analysis of Covariance (ANCOVA), Paired t-tests
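ANCOVA can be sketched as a linear model that regresses the post-test on the pre-test (the covariate) plus a treatment indicator. The data below are hypothetical, and a real analysis would use a statistics package that also reports standard errors and p-values; this only shows the adjusted-effect idea.

```python
# Minimal ANCOVA sketch: post-test modelled as pre-test (covariate) + group.
# Scores are hypothetical; a full analysis would also test significance.
import numpy as np

pre = np.array([40, 45, 50, 55, 42, 48, 52, 58], dtype=float)
post = np.array([60, 66, 70, 76, 50, 55, 58, 64], dtype=float)
group = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)  # 1 = experimental

# Design matrix: intercept, covariate, treatment indicator
X = np.column_stack([np.ones_like(pre), pre, group])
(b0, b1, b2), *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"treatment effect adjusted for pre-test: {b2:.2f} points")
```

The coefficient on the group indicator estimates the treatment effect after removing the part of the post-test explained by pre-test differences.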
Post-test Only Control Group Design
Definition: A true experimental design involving random
assignment of participants into experimental and control
groups, applying the treatment to the experimental group,
and post-testing both groups without a pre-test.
Statistics: Independent t-test, ANOVA
Solomon Four-group Design
Definition: A true experimental design that combines
elements of Pre-test Post-test Control Group and Post-test
Only Control Group designs. It includes two experimental
groups (one with pre-testing and one without) and two
control groups (one with pre-testing and one without).
Statistics: ANCOVA, Factorial ANOVA
Factorial Design
Definition: A true experimental design involving the
simultaneous manipulation of two or more independent
variables to study their individual and interactive effects on
the dependent variable(s).
Statistics: Factorial ANOVA; if it indicates significant
interactions, post hoc tests such as Tukey's HSD or
Bonferroni corrections are employed to determine specific
group differences.
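The main and interaction effects in a balanced 2x2 factorial can be read directly off the cell means. The example below uses hypothetical scores for two invented factors (teaching method and class size); a full factorial ANOVA, e.g. via statsmodels, would add the significance tests.

```python
# Effect estimates from cell means in a hypothetical 2x2 factorial:
# teaching method (A/B) x class size (small/large). Scores are invented.
# A full factorial ANOVA would also test whether these effects are significant.
import numpy as np

scores = {
    ("method_A", "small"): np.array([85.0, 88, 84, 87]),
    ("method_A", "large"): np.array([78.0, 80, 77, 79]),
    ("method_B", "small"): np.array([75.0, 77, 74, 76]),
    ("method_B", "large"): np.array([72.0, 70, 73, 71]),
}
m = {k: v.mean() for k, v in scores.items()}

# Main effect of method: average advantage of A cells over B cells
method_effect = (m[("method_A", "small")] + m[("method_A", "large")]
                 - m[("method_B", "small")] - m[("method_B", "large")]) / 2

# Interaction: does the method advantage differ between class sizes?
interaction = ((m[("method_A", "small")] - m[("method_B", "small")])
               - (m[("method_A", "large")] - m[("method_B", "large")]))
print(f"method main effect = {method_effect:.2f}, interaction = {interaction:.2f}")
```

A non-zero interaction here means the benefit of the method depends on class size, which is exactly what a factorial design is built to detect.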
When to Use Each Design
Pre-test Post-test Control Group Design:
Appropriate When:
Establishing a clear cause-and-effect relationship is
essential.
A baseline measurement (pre-test) is necessary to assess
changes accurately.
Random assignment is possible and ethical.
Considerations:
Useful for interventions where measuring the change from
the initial state is critical.
Allows researchers to assess the effectiveness of the
treatment while controlling for initial differences between
groups.
When to Use Each Design
Post-test Only Control Group Design:
Appropriate When:
Random assignment is feasible and ethical.
A pre-test is not necessary due to the nature of the study
or to avoid potential biases.
Considerations:
Suitable for situations where a pre-test might sensitize
participants or introduce experimental biases.
Allows for a simplified study design when baseline data
collection is not practical or poses risks.
When to Use Each Design
Solomon Four-group Design:
Appropriate When:
Testing effects are a concern, and researchers want to
assess the impact of both pre-testing and the treatment
itself.
Random assignment is possible and ethical.
Considerations:
Useful in situations where the effect of pre-testing on
participants' behavior needs to be accounted for.
Provides a comprehensive analysis of the treatment's
impact while addressing potential biases introduced by
pre-testing.
When to Use Each Design
Factorial Design:
Appropriate When:
Studying the interaction between multiple independent variables
is crucial.
Researchers want to assess the impact of two or more factors
simultaneously on the dependent variable.
Random assignment is possible and ethical.
Considerations:
Useful for exploring complex relationships between variables and
understanding how different factors influence outcomes.
Allows for the examination of main effects (independent variables
individually) and interaction effects (combined effects of multiple
variables).
Non-Equivalent Design
Definition: A quasi-experimental design in which two or
more intact groups are compared without random
assignment of participants. Because the groups cannot be
assumed equivalent at the outset, the lack of
randomization can introduce bias.
Statistics: ANCOVA
Separate Sample Pre-test Post-test Design
Definition: A quasi-experimental design in which one
sample is measured before the intervention and a separate
sample is measured after it, so no participant completes
both tests. The lack of randomization can affect the
internal validity of the study.
Statistics: ANCOVA
Time Series Design
Definition: A quasi-experimental design in
which data is collected from the same group of
participants at multiple points in time before and
after an intervention. This design helps observe
changes in the dependent variable over time.
Statistics: Time series analysis involves
statistical methods such as Autoregressive
Integrated Moving Average (ARIMA) modeling,
Box-Jenkins modeling, or Fourier analysis.
These methods are used to analyze patterns,
trends, and seasonality in the data collected over
time.
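Full ARIMA or Box-Jenkins modelling needs a dedicated library (e.g. statsmodels), but the core idea of an interrupted time series, comparing the level and trend of the series before and after the intervention, can be sketched with plain NumPy. The monthly scores below are hypothetical.

```python
# Interrupted time-series sketch: compare level and trend before vs after
# an intervention. Monthly scores are hypothetical; full ARIMA modelling
# would require a dedicated library such as statsmodels.
import numpy as np

before = np.array([50.0, 51, 52, 51, 53, 52])  # six months pre-intervention
after = np.array([58.0, 59, 61, 60, 62, 63])   # six months post-intervention

level_shift = after.mean() - before.mean()     # jump in the series level

t = np.arange(len(before))
slope_before = np.polyfit(t, before, 1)[0]     # trend before the intervention
slope_after = np.polyfit(t, after, 1)[0]       # trend after the intervention
print(f"level shift = {level_shift:.1f}, trend {slope_before:.2f} -> {slope_after:.2f}")
```

Repeated pre-intervention measurements are what distinguish this design from a simple pre/post comparison: they reveal whether the outcome was already trending upward before the intervention.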
Conclusion
Control + Randomization = Validity + Reliability + Applicability
Pre-Experimental: control and randomization are possible but not applied.
Quasi-Experimental: applied partially.
True Experimental: applied as fully as possible.
Thanks!
Some Books for better understanding
- Research in Education, by Best & Kahn
- Research Methods in Education, by Radha Mohan
- Introduction to Research Methodology in Education, by Hadler & Sarkar
- Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, by John W. Creswell and J. David Creswell
- Designing and Conducting Experiments in Social Science, by Clifford J. Sherry
- Experimental Design and Analysis for Psychology, by Roger E. Kirk