Experimental research uses random assignment to groups to demonstrate cause-and-effect relationships. It requires an independent variable that can be manipulated and a dependent variable that can be measured in all groups. Common experimental designs include single-group pretest-posttest, two-group posttest-only, and Solomon four-group designs. Quasi-experimental research lacks random assignment but otherwise follows a similar process to assess causal effects. Both are subject to threats to internal and external validity.
Experimental Method of Educational Research. Neha Deo
The experimental method is the most challenging method of educational research. In the experimental method, different functional and factorial designs can be used. One must also consider the internal and external validity of the experiment. In this presentation, all of these topics are discussed in detail.
This presentation is for educational purposes only. I do not own the rights to the written material, pictures, or illustrations used.
This is being uploaded for students who are trying to understand what a quasi-experimental research design should look like.
An experimental research design helps researchers pursue their research objectives with greater clarity and transparency. It is a framework of protocols and procedures created to conduct experimental research scientifically using two sets of variables. Experimental research is a quintessentially quantitative method.
Experimental research helps a researcher gather the data needed to make better research decisions and to establish the findings of a study.
Chapter 10
Data Interpretation Issues
Learning Objectives
• Distinguish between random and
systematic errors
• State and describe sources of bias
• Identify techniques to reduce bias at the
design and analysis phases of a study
• Define what is meant by the term
confounding and provide three examples
• Describe methods to control confounding
Validity of Study Designs
• The degree to which the inference drawn
from a study is warranted when account is
taken of the study methods, the
representativeness of the study sample,
and the nature of the population from
which it is drawn.
Validity of Study Designs
• Two components of validity:
– Internal validity
– External validity
Internal Validity
• A study is said to have internal validity
when there has been proper selection of
study groups and a lack of error in
measurement.
• Concerned with the appropriate
measurement of exposure, outcome, and
association between exposure and
disease.
External Validity
• External validity implies the ability to
generalize beyond a set of observations to
some universal statement.
• A study is externally valid, or
generalizable, if it allows unbiased
inferences regarding some other target
population beyond the subjects in the
study.
Sources of Error in
Epidemiologic Research
• Random errors
• Systematic errors (bias)
Random Errors
• Reflect fluctuations around a true value of
a parameter because of sampling
variability.
Factors That Contribute to
Random Error
• Poor precision
• Sampling error
• Variability in measurement
Poor Precision
• Occurs when the factor being measured is
not measured sharply.
• Analogous to aiming a rifle at a target that
is not in focus.
• Precision can be increased by increasing
sample size or the number of
measurements.
• Example: Bogalusa Heart Study
Sampling Error
• Arises when obtained sample values
(statistics) differ from the values
(parameters) of the parent population.
• Although there is no way to prevent a
non-representative sample from
occurring, increasing the sample size
can reduce the likelihood of its
happening.
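The point above can be demonstrated with a short simulation (a sketch using hypothetical population values: mean 70, SD 10). The spread of sample means shrinks roughly as 1/√n, so quadrupling the sample size halves the sampling error.

```python
import random
import statistics

random.seed(42)

# Hypothetical parent population: scores with true mean 70, SD 10.
POP_MEAN, POP_SD = 70.0, 10.0

def sample_mean_spread(n, trials=2000):
    """Draw many samples of size n; return the spread (SD) of their means."""
    means = [
        statistics.mean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

# Sampling error shrinks roughly as 1/sqrt(n): quadrupling n halves the spread.
small = sample_mean_spread(n=25)   # theory predicts about 10/sqrt(25)  = 2.0
large = sample_mean_spread(n=100)  # theory predicts about 10/sqrt(100) = 1.0
print(round(small, 2), round(large, 2))
```

The simulation cannot prevent any single sample from being unrepresentative, but it shows why a larger sample makes a badly unrepresentative one less likely.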
Variability in Measurement
• The lack of agreement in results from
time to time reflects random error
inherent in the type of measurement
procedure employed.
Bias (Systematic Errors)
• “Deviation of results or inferences
from the truth, or processes leading to
such deviation. Any trend in the
collection, analysis, interpretation,
publication, or review of data that can
lead to conclusions that are
systematically different from the
truth.”
Factors That Contribute to
Systematic Errors
• Selection bias
• Information bias
• Confounding
Selection Bias
• Refers to distortions that result from procedures
used to select subjects and from factors that
influence participation in the study.
• Arises when the relation between exposure and
disease is different for th ...
2. Experimental Research
• Can demonstrate cause-and-effect very convincingly
• Very stringent research design requirements
• Experimental design requires:
» Random assignment to groups (experimental and
control)
» Independent treatment variable that can be applied to
the experimental group
» Dependent variable that can be measured in all groups
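The first requirement above, random assignment, can be sketched in a few lines (the participant IDs are hypothetical):

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into experimental and control groups.

    A minimal sketch of random assignment: shuffle a copy of the list,
    then split it in half.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

people = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
experimental, control = randomly_assign(people, seed=1)
print(len(experimental), len(control))
```

Because group membership is decided by chance alone, pre-existing differences between participants tend to balance out across the two groups.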
3. Fundamentals of Experimental and Quasi-
Experimental Research
• Random selection and random assignment :
» Distinguish between “selection” and “assignment”
» Random selection helps to assure population validity
» If you incorporate random assignment, it is
experimental research
» If you do not use random assignment, it is
quasi-experimental research
4. Fundamentals of Experimental and Quasi-
Experimental Research (cont’d.)
• When to use experimental research design :
» If you strongly suspect a cause-and-effect relationship
exists between two conditions, and
» The independent variable can be introduced to
participants and can be manipulated, and
» The resulting dependent variable can be measured for
all participants
5. Internal and External Validity
• “Validity of research” refers to the degree to which the
conclusions are accurate and generalizable
• Both experimental and quasi-experimental research are
subject to threats to validity
• If threats are not controlled for, they may introduce error
into the study, which will lead to misleading conclusions
6. Threats to External Validity
• External validity—extent to which the results can be
generalized to other groups or settings
» Population validity—degree of similarity among
sample used, population from which it came, and target
population
» Ecological validity—physical or emotional situation or
setting that may have been unique to the experiment
» If the treatment effects can be obtained only under a limited set
of conditions, or only by the original researcher, the findings
have low ecological validity.
7. Threats to External Validity (cont’d.)
• Selection bias.
– If sample is biased you cannot generalize to the
population.
• Reactive effects.
– Experimental setting.
• Differs from natural setting.
– Testing.
• Pretest influences how subjects respond to the treatment.
• Multiple-treatment interference.
– If the subjects are exposed to more than one
treatment, then the findings could only be generalized
to individuals exposed to the same treatments in the
same order of presentation.
8. Threats to Internal Validity
• Internal validity—extent to which differences on the
dependent variable are a direct result of the manipulation
of the independent variable
» History—when factors other than treatment can exert influence
over the results; problematic over time
» Maturation—when changes occur in dependent variable that may
be due to natural developmental changes; problematic over time
» Testing—pretest may give clues to treatment or posttest and may
result in improved posttest scores
» Instrumentation – Nature of outcome measure has changed.
9. Threats to Internal Validity (cont’d.)
» Regression – Tendency of extreme scores to be nearer
to the mean at retest
» Differential selection of participants—participants are
not selected/assigned randomly
» Attrition (mortality)—loss of participants
» Experimental treatment diffusion – Control conditions
receive experimental treatment.
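Regression toward the mean, listed above, can be demonstrated with a small simulation (hypothetical scores: a stable true ability plus independent test-day noise). The most extreme scorers on a first test score closer to the overall mean on retest even though nothing about them changed.

```python
import random
import statistics

random.seed(3)

# Observed score = stable true ability + independent test-day noise.
true_ability = [random.gauss(100, 10) for _ in range(5000)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]

# Select the "extreme" scorers: the top 10% on test 1.
cutoff = sorted(test1)[int(0.9 * len(test1))]
extreme = [(s1, s2) for s1, s2 in zip(test1, test2) if s1 >= cutoff]

mean1 = statistics.mean(s1 for s1, _ in extreme)
mean2 = statistics.mean(s2 for _, s2 in extreme)
print(round(mean1, 1), round(mean2, 1))  # retest mean falls back toward 100
```

This is why a design that selects participants for extreme pretest scores will show apparent "improvement" at posttest with no treatment at all.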
10. Experimental and Quasi-Experimental
Research Designs
• Commonly used experimental design notation :
» X1 = treatment group
» X2 = control/comparison group
» O = observation (pretest, posttest, etc.)
» R = random assignment
11. Common Experimental Designs
• Single-group pretest-treatment-posttest design:
O X O
» Technically, a pre-experimental design (only one
group; therefore, no random assignment exists)
» Overall, a weak design
» Why?
12. Common Experimental Designs (cont’d.)
• Two-group treatment-posttest-only design:
R X1 O
R X2 O
» Here, we have random assignment to experimental,
control groups
» A better design, but still weak—cannot be sure that
groups were equivalent to begin with
13. Common Experimental Designs (cont’d.)
• Two-group pretest-treatment-posttest design:
R O X1 O
R O X2 O
» A substantially improved design—previously
identified errors have been reduced
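A minimal simulation of this design (all effect sizes are hypothetical) shows why it improves on the earlier designs: because both randomized groups experience the same history and maturation, the difference in mean gain scores isolates the treatment effect.

```python
import random
import statistics

random.seed(7)

def simulate_group(n, treatment_effect):
    """Pretest ~ N(50, 8); posttest adds maturation, treatment, and noise.

    Hypothetical numbers for illustration: every participant matures
    by +2 between tests, and only the treated group gets the effect.
    """
    rows = []
    for _ in range(n):
        pre = random.gauss(50, 8)
        post = pre + 2.0 + treatment_effect + random.gauss(0, 3)
        rows.append((pre, post))
    return rows

treated = simulate_group(200, treatment_effect=5.0)  # R O X1 O
control = simulate_group(200, treatment_effect=0.0)  # R O X2 O

def mean_gain(rows):
    """Average posttest-minus-pretest gain for a group."""
    return statistics.mean(post - pre for pre, post in rows)

# Maturation (+2) cancels in the comparison, leaving the treatment effect.
effect_estimate = mean_gain(treated) - mean_gain(control)
print(round(effect_estimate, 1))
```

The estimate lands near the simulated effect of 5, while a single-group pretest-posttest analysis of the treated group alone would overstate it by the shared maturation.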
14. Common Experimental Designs (cont’d.)
• Solomon four-group design:
R O X1 O
R O X2 O
R X1 O
R X2 O
» A much improved design—how??
» One serious drawback—requires twice as many
participants
15. Common Experimental Designs (cont’d.)
• Factorial designs:
R O X1 g1 O
R O X2 g1 O
R O X1 g2 O
R O X2 g2 O
» Incorporates two or more factors
» Enables researcher to detect differential differences
(effects apparent only on certain combinations of
levels of independent variables)
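A differential (interaction) effect can be sketched with a simulated 2×2 factorial design in which a hypothetical treatment helps only subgroup g1:

```python
import random
import statistics

random.seed(11)

def outcome(treated, g):
    """Hypothetical model: the treatment helps only subgroup g1."""
    effect = 5.0 if (treated and g == "g1") else 0.0
    return 50.0 + effect + random.gauss(0, 4)

# Cell means for each (treatment, subgroup) combination, n = 300 per cell.
cells = {}
for treated in (True, False):
    for g in ("g1", "g2"):
        cells[(treated, g)] = statistics.mean(
            outcome(treated, g) for _ in range(300)
        )

# The interaction contrast: treatment effect in g1 minus that in g2.
effect_g1 = cells[(True, "g1")] - cells[(False, "g1")]
effect_g2 = cells[(True, "g2")] - cells[(False, "g2")]
print(round(effect_g1, 1), round(effect_g2, 1))
```

A simple two-group design pooling g1 and g2 would report a diluted average effect; only the factorial layout reveals that the effect is confined to one subgroup.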
16. Common Experimental Designs (cont’d.)
• Single-participant measurement-treatment-measurement
designs:
O O O | X O X O | O O O
» Purpose is to monitor effects on one subject
» Results can be generalized only with great caution
17. Common Quasi-Experimental Designs
• Posttest-only design with nonequivalent groups:
X1 O
X2 O
» Uses two groups from same population
» Questions must be addressed regarding equivalency of
groups prior to introduction of treatment
18. Common Quasi-Experimental Designs
(cont’d.)
• Pretest-posttest design with nonequivalent groups:
O X1 O
O X2 O
» A stronger design—pretest may be used to establish
group equivalency
19. Similarities Between Experimental and
Quasi-Experimental Research
• Cause-and-effect relationship is hypothesized
• Participants are randomly assigned (experimental) or
nonrandomly assigned (quasi-experimental)
• Application of an experimental treatment by researcher
• Following the treatment, all participants are measured on
the dependent variable
• Data are usually quantitative and analyzed by looking for
significant differences on the dependent variable