EDR 8205-4: Experimental (Randomized) Design
1. Week 4 Assignment
Analyze Experimental (Randomized) Designs
Teaching and Learning
Orlanda Haynes
North Central University
School of Education
Ed.D. Student
EDR 8205-4, Fall 2018
Copyright Note: All images used in this presentation have been confirmed to be in the
public domain, of expired copyright status, licensed under the GNU Free Documentation
License, or licensed under a Creative Commons license.
2. Experimental Randomized Design
• Experimenters administer
treatments to experimental units
(e.g., objects or research
subjects).
• Randomization is used to create
treatment and control groups.
• This process creates an
experimental randomized design,
also known as a completely
randomized design.
5. Experimental Randomized Design
Random Selection
• Identifies how experimenters select
sample subjects from the target
population
• Usually requires some form of
stratification
• Relies on probability sampling methods
• Can be used for generalization purposes
• Supports statistical tests of
significance
Random Assignment
• Researchers assign participants to the
treatment or the control group after
completion of the selection process
• Experimenters can use either one or
both; it primarily depends on the aim of
the research.
8. Experimental Randomized Design
Sampling Methods
Random
• All members of the target
population have an equal chance of
being selected.
• It includes random selection or
random assignment or both.
• It is the primary method for
generalizing to the larger
population.
11. Analyze Experimental (Randomized) Designs
Strengths
• Flexibility allows for treatments
and replications
• Non-complex statistical analysis
• Reduces or removes threats to
internal validity
• Provides more degrees of freedom
for estimating experimental error
than similar designs
Weaknesses
• Lack of restrictions allows
environmental variation to
inflate experimental error
• Variation increases when
large amounts of
experimental material are
needed
13. References
Bernard, H. R., & Bernard, H. R. (2012). Social research methods: Qualitative and
quantitative approaches. Thousand Oaks, CA: Sage.
Black, T. (2012). Doing quantitative research in the social sciences: An integrated
approach to research design, measurement, and statistics. Thousand Oaks, CA:
Sage Publications.
Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental
designs for research. Thousand Oaks, CA: Ravenio Books.
14. References
Creswell, J. W. (2015). Educational research: Planning, conducting, and
evaluating quantitative and qualitative research (5th ed.). Boston, MA:
Pearson.
Çaparlar, C. Ö., & Dönmez, A. (2016). What is scientific research and
how can it be done? Turkish Journal of Anaesthesiology and
Reanimation, 44(4), 212–218. http://doi.org/10.5152/TJAR.2016.34711
McLeod, S. A. (2017). Qualitative vs. quantitative. Retrieved from
www.simplypsychology.org/qualitative-quantitative.html
15. References
Lodico, M., Spaulding, D., & Voegtle, K. (2010). Methods in
educational research: From theory to practice (Laureate Education, Inc.,
custom ed.). San Francisco, CA: John Wiley & Sons.
Editor's Notes
[Read out loud].
Hello, everyone! Thanks for joining the discussion on random selection and random assignment. I’m Orlanda Haynes. A primary component of experimental research is random sampling. This presentation includes an overview of
differences between random selection and random assignment;
how these strategies help reduce uncertainty, error, bias, and threats to internal and external validity; and
three sampling methods and the advantages and disadvantages of each.
Brochures, which include a references list, are located at the refreshment booth, and a session for questions, thoughts, and comments will follow the discussion. Let us begin with a definition of experimental randomized designs.
[Read slide note]
Researchers employ experiments to identify or statistically estimate causal factors; randomization is the process by which research subjects are assigned to experimental groups (e.g., treatment and control). That is, experimenters identify sample participants through random selection and then use random assignment to determine which group each participant joins.
This process ensures that all subjects have an equal chance of being selected and assigned. Experimenters usually use computer programs for randomization and some form of stratification of subjects (e.g., geographic or demographic) prior to random assignment. Doing so reduces confounding factors, or alternative explanations for outcome effects. Together, these steps allow for the greatest measures of reliability and validity of treatment effects (Black, 2012; Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010).
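The complete randomization just described can be sketched in a few lines of code. This is a minimal illustration only; the participant IDs, group sizes, and the `randomly_assign` helper are hypothetical, not part of any particular study:

```python
import random

# Hypothetical pool of already-selected participant IDs.
participants = [f"subject_{i:02d}" for i in range(1, 21)]

def randomly_assign(subjects, seed=None):
    """Shuffle subjects and split them evenly into treatment and control
    groups, giving every subject an equal chance of landing in either one."""
    rng = random.Random(seed)
    shuffled = list(subjects)   # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

groups = randomly_assign(participants, seed=42)
```

In practice researchers typically rely on dedicated randomization software, but the principle is the same: every experimental unit has an equal probability of entering either group.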
[Read slide note].
As shown, complete randomization allows for random selection and assignment. To maximize validity, however, some researchers use block designs. If a difference such as age exists within the groups, for example, and researchers believe the variable could negatively affect outcomes, they can divide the groups into homogeneous blocks (e.g., age groups such as 30 years old and under vs. 31-60) prior to random assignment, thus creating a randomized block design (Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010).
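The blocking step above can be sketched as follows. The participant data, the age cutoff, and the two-block split are all hypothetical, assumed only for illustration:

```python
import random

# Hypothetical participants, each with an age attribute.
participants = [("p01", 22), ("p02", 45), ("p03", 29), ("p04", 58),
                ("p05", 19), ("p06", 33), ("p07", 27), ("p08", 41)]

def block_then_assign(subjects, cutoff=30, seed=0):
    """Divide subjects into homogeneous age blocks, then randomly assign
    within each block so both groups end up with a similar age mix."""
    rng = random.Random(seed)
    blocks = {"young": [], "older": []}
    for name, age in subjects:
        blocks["young" if age <= cutoff else "older"].append(name)
    assignment = {"treatment": [], "control": []}
    for members in blocks.values():
        rng.shuffle(members)          # randomize inside the block only
        half = len(members) // 2
        assignment["treatment"] += members[:half]
        assignment["control"] += members[half:]
    return assignment

assignment = block_then_assign(participants)
```

Because randomization happens inside each block, the nuisance variable (age here) is balanced across the treatment and control groups by construction.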
[Read slide note].
As this slide shows, a randomized block strategy is highly useful in experimental designs. However, since the terms random selection and random assignment are sometimes used interchangeably even though their meanings differ (Çaparlar & Dönmez, 2016; McLeod, 2017), let us take a closer look with the next slide.
[Read slide content and slide notes].
To use both sampling strategies with a sample of 100 participants (students), for example, researchers could draw 50 names for the treatment group and assign the remaining 50 to the control group. On the other hand, to use only random assignment, experimenters could ask a teacher to select 100 students and then randomly assign the students to the treatment and control groups. In that case, however, they could draw inferences about the effect of the intervention but not conclude that the effect would generalize to the overall population (Lodico, Spaulding, & Voegtle, 2010). Random selection is therefore essential to control for external validity, the extent to which findings can be generalized to the larger population. Random assignment, in turn, is paramount to control for internal validity, which allows experimenters to make causation claims about the treatment effects (Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010).
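The two-step procedure above, random selection from the population followed by random assignment to groups, might look like this in code. The roster size and student names are hypothetical:

```python
import random

rng = random.Random(7)

# Hypothetical roster of 500 students: the target population.
population = [f"student_{i:03d}" for i in range(500)]

# Step 1, random selection: draw a probability sample of 100 students,
# so every member of the population has an equal chance of inclusion.
sample = rng.sample(population, 100)

# Step 2, random assignment: draw 50 names for the treatment group;
# the remaining 50 form the control group.
treatment = set(rng.sample(sample, 50))
control = [s for s in sample if s not in treatment]
```

Skipping step 1 (e.g., letting a teacher hand-pick the 100 students) leaves random assignment intact but forfeits the claim that findings generalize to the wider population.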
[Read slide and notes].
In research language, bias refers to the intentional or unintentional influence that experimenters may have on sample participants or some aspect of the study (Lodico, Spaulding, & Voegtle, 2010). Some form of bias exists in all research frameworks, however, so it is more relevant to ask to what extent bias influences research results (Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010). For instance, sampling bias occurs when sample participants do not represent the larger population (e.g., a sample for a study about birth control use contains far more females than males, and this imbalance may influence the study's results). Selection bias can occur, for example, if research subjects are not equivalent and randomly assigned to groups, or if all members of the target population do not have an equal chance of being selected.
Response bias is quite interesting because it occurs when only individuals with particular characteristics respond to survey invitations. For example, if a survey is about the ethical use of birth control and only birth control users respond, the sample group is not representative of the larger population (Bernard & Bernard, 2012; Lodico, Spaulding, & Voegtle, 2010).
Performance and measurement bias refer to how the intervention or treatment is administered, for example, whether participants' or research staff's behavior is unnatural because of the test environment, and whether the research staff knows which sample participants belong to which groups (Bernard & Bernard, 2012).
[Read slide and notes].
These errors may not only affect research outcomes but also pose threats to research validity (Bernard & Bernard, 2012). For example, if taking part in a research study interferes with some subjects' eating habits or normal behaviors, the effect is referred to as the “Hawthorne effect.” On the other hand, if sample participants believe that a significant behavior change occurred after taking a tablet (e.g., in the experimental treatment group), researchers could attribute the results to the “placebo effect” because the change was not due to the independent variable (Lodico, Spaulding, & Voegtle, 2010).
Likewise, if research subjects know that they are in, say, the control group, they may change their normal responses by trying harder to please the researchers or for some other reason. This behavior, known as the “John Henry effect,” is a threat to internal validity. The “rating effect” is equally relevant because it refers to the rating of research subjects, whom experimenters could subjectively rate higher or lower (Bernard & Bernard, 2012).
[Read slide and notes].
This sampling method eliminates or reduces sampling bias; its primary disadvantage is that it is time-consuming and more costly than other methods (Lodico, Spaulding, & Voegtle, 2010).
[Read slide and notes].
The sampling design is usually time-consuming and complex, but its results can be used for generalization purposes. If researchers wanted to know, for example, which subjects cost undergraduates the most in textbooks, they would need to determine the relative percentage of each subject taught at the college or university (e.g., engineering 6%, social sciences 24%, medicine 3%, etc.). The sample should contain these groups in the same proportions as the target population (the college or university). As this example shows, the overall task can become time-consuming and more costly than other sampling approaches (Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010).
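The proportional allocation described above can be sketched as follows, using the example percentages. The total sample size and the "other" stratum (covering the remaining share of subjects) are assumptions made for illustration:

```python
# Share of each subject area in the student body, from the example above;
# "other" is a hypothetical stratum covering the remaining share.
proportions = {"engineering": 0.06, "social_sciences": 0.24,
               "medicine": 0.03, "other": 0.67}

def proportional_allocation(props, sample_size):
    """Give each stratum a number of sample slots matching its share
    of the target population (proportional stratified sampling)."""
    return {name: round(sample_size * share) for name, share in props.items()}

allocation = proportional_allocation(proportions, 200)
# For a sample of 200: engineering 12, social_sciences 48, medicine 6, other 134
```

Within each stratum, researchers would then draw the allocated number of participants by simple random sampling, preserving the population's composition in the final sample.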
[Read slide and notes].
As the slide notes, the key word here is “convenience.” Researchers invite sample participants from the target population based primarily on their availability and willingness to take part in the study (Campbell & Stanley, 2015; Lodico, Spaulding, & Voegtle, 2010). For example, experimenters might invite 20 seniors from a rural high school in Kentucky to take part in an experiment investigating the age at which minors begin smoking. Results could be used, among other things, to fill gaps in the literature, to lend credence to education policy development or changes, and to inform the general public (Lodico, Spaulding, & Voegtle, 2010).
[Read out loud].
Randomized designs are the simplest way to address sampling bias, in part because all sample participants (from the target population) have an equal chance of being selected and assigned to the treatment or control group. Therefore, researchers employ the design (where possible) to demonstrate cause-and-effect relationships (Lodico, Spaulding, & Voegtle, 2010).
[Read aloud].
This concludes the presentation. Let’s take a few moments for questions, thoughts, or comments.